From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v4 01/16] dma-mapping: introduce new DMA attribute to indicate MMIO memory
Date: Tue, 19 Aug 2025 20:36:45 +0300
Message-ID: <08e044a00a872932e106f7e27449a8eab2690dbc.1755624249.git.leon@kernel.org>

This patch introduces the DMA_ATTR_MMIO attribute to mark DMA buffers
that reside in memory-mapped I/O (MMIO) regions, such as device BARs
exposed through the host bridge, which are accessible for peer-to-peer
(P2P) DMA.

This attribute is especially useful for exporting device memory to other
devices for DMA without CPU involvement, and avoids unnecessary or
potentially detrimental CPU cache maintenance calls.

DMA_ATTR_MMIO is intended to provide dma_map_resource() functionality
without requiring callers to invoke a special function and branch
between code paths.

Signed-off-by: Leon Romanovsky
Reviewed-by: Jason Gunthorpe
---
 Documentation/core-api/dma-attributes.rst | 18 ++++++++++++++++++
 include/linux/dma-mapping.h               | 20 ++++++++++++++++++++
 include/trace/events/dma.h                |  3 ++-
 rust/kernel/dma.rs                        |  3 +++
 4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
index 1887d92e8e92..0bdc2be65e57 100644
--- a/Documentation/core-api/dma-attributes.rst
+++ b/Documentation/core-api/dma-attributes.rst
@@ -130,3 +130,21 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
 subsystem that the buffer is fully accessible at the elevated privilege
 level (and ideally inaccessible or at least read-only at the
 lesser-privileged levels).
+
+DMA_ATTR_MMIO
+-------------
+
+This attribute indicates the physical address is not normal system
+memory. It may not be used with kmap*()/phys_to_virt()/phys_to_page()
+functions, it may not be cacheable, and access using CPU load/store
+instructions may not be allowed.
+
+Usually this will be used to describe MMIO addresses, or other
+non-cacheable register addresses. When DMA mapping this sort of address
+we call the operation Peer to Peer, as one device is DMA'ing to another
+device. For PCI devices the p2pdma APIs must be used to determine if
+DMA_ATTR_MMIO is appropriate.
+
+For architectures that require cache flushing for DMA coherence,
+DMA_ATTR_MMIO will not perform any cache flushing. The address
+provided must never be mapped cacheable into the CPU.
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 55c03e5fe8cb..4254fd9bdf5d 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -58,6 +58,26 @@
  */
 #define DMA_ATTR_PRIVILEGED		(1UL << 9)
 
+/*
+ * DMA_ATTR_MMIO - Indicates memory-mapped I/O (MMIO) region for DMA mapping
+ *
+ * This attribute indicates the physical address is not normal system
+ * memory. It may not be used with kmap*()/phys_to_virt()/phys_to_page()
+ * functions, it may not be cacheable, and access using CPU load/store
+ * instructions may not be allowed.
+ *
+ * Usually this will be used to describe MMIO addresses, or other
+ * non-cacheable register addresses. When DMA mapping this sort of address
+ * we call the operation Peer to Peer, as one device is DMA'ing to another
+ * device. For PCI devices the p2pdma APIs must be used to determine if
+ * DMA_ATTR_MMIO is appropriate.
+ *
+ * For architectures that require cache flushing for DMA coherence,
+ * DMA_ATTR_MMIO will not perform any cache flushing. The address
+ * provided must never be mapped cacheable into the CPU.
+ */
+#define DMA_ATTR_MMIO		(1UL << 10)
+
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
  * be given to a device to use as a DMA source or target. It is specific to a
diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index d8ddc27b6a7c..ee90d6f1dcf3 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -31,7 +31,8 @@ TRACE_DEFINE_ENUM(DMA_NONE);
 		{ DMA_ATTR_FORCE_CONTIGUOUS, "FORCE_CONTIGUOUS" }, \
 		{ DMA_ATTR_ALLOC_SINGLE_PAGES, "ALLOC_SINGLE_PAGES" }, \
 		{ DMA_ATTR_NO_WARN, "NO_WARN" }, \
-		{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" })
+		{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" }, \
+		{ DMA_ATTR_MMIO, "MMIO" })
 
 DECLARE_EVENT_CLASS(dma_map,
 	TP_PROTO(struct device *dev, phys_addr_t phys_addr, dma_addr_t dma_addr,
diff --git a/rust/kernel/dma.rs b/rust/kernel/dma.rs
index 2bc8ab51ec28..61d9eed7a786 100644
--- a/rust/kernel/dma.rs
+++ b/rust/kernel/dma.rs
@@ -242,6 +242,9 @@ pub mod attrs {
     /// Indicates that the buffer is fully accessible at an elevated privilege level (and
     /// ideally inaccessible or at least read-only at lesser-privileged levels).
     pub const DMA_ATTR_PRIVILEGED: Attrs = Attrs(bindings::DMA_ATTR_PRIVILEGED);
+
+    /// Indicates that the buffer is MMIO memory.
+    pub const DMA_ATTR_MMIO: Attrs = Attrs(bindings::DMA_ATTR_MMIO);
 }
 
 /// An abstraction of the `dma_alloc_coherent` API.
-- 
2.50.1

From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v4 02/16] iommu/dma: implement DMA_ATTR_MMIO for dma_iova_link().
Date: Tue, 19 Aug 2025 20:36:46 +0300
Message-ID: <62d9a6c3ca03037631f6d0640ebec5fbac41d547.1755624249.git.leon@kernel.org>

This will replace the hacky use of DMA_ATTR_SKIP_CPU_SYNC to avoid
touching the possibly non-KVA MMIO memory.

Also correct the caching attribute used for the IOMMU: MMIO memory
should not be mapped cacheable inside the IOMMU mapping or it can
create system problems. Set IOMMU_MMIO for DMA_ATTR_MMIO.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
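As an illustration (not part of this patch), a caller linking P2P MMIO
into an IOVA range passes the new attribute straight through
dma_iova_link(); the wrapper below is hypothetical, using the
dma_iova_link() signature shown in the diff:

    /*
     * Hypothetical example: link an MMIO range into a preallocated
     * IOVA. Unaligned MMIO input cannot be bounced through swiotlb
     * and now fails with -EPERM instead.
     */
    static int link_peer_mmio(struct device *dev, struct dma_iova_state *state,
                              phys_addr_t mmio_phys, size_t offset, size_t size)
    {
            return dma_iova_link(dev, state, mmio_phys, offset, size,
                                 DMA_BIDIRECTIONAL, DMA_ATTR_MMIO);
    }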
 drivers/iommu/dma-iommu.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index ea2ef53bd4fe..e1185ba73e23 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -724,7 +724,12 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev
 static int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
 		     unsigned long attrs)
 {
-	int prot = coherent ? IOMMU_CACHE : 0;
+	int prot;
+
+	if (attrs & DMA_ATTR_MMIO)
+		prot = IOMMU_MMIO;
+	else
+		prot = coherent ? IOMMU_CACHE : 0;
 
 	if (attrs & DMA_ATTR_PRIVILEGED)
 		prot |= IOMMU_PRIV;
@@ -1838,12 +1843,13 @@ static int __dma_iova_link(struct device *dev, dma_addr_t addr,
 		unsigned long attrs)
 {
 	bool coherent = dev_is_dma_coherent(dev);
+	int prot = dma_info_to_prot(dir, coherent, attrs);
 
-	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+	if (!coherent && !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
 		arch_sync_dma_for_device(phys, size, dir);
 
 	return iommu_map_nosync(iommu_get_dma_domain(dev), addr, phys, size,
-			dma_info_to_prot(dir, coherent, attrs), GFP_ATOMIC);
+			prot, GFP_ATOMIC);
 }
 
 static int iommu_dma_iova_bounce_and_link(struct device *dev, dma_addr_t addr,
@@ -1949,9 +1955,13 @@ int dma_iova_link(struct device *dev, struct dma_iova_state *state,
 		return -EIO;
 
 	if (dev_use_swiotlb(dev, size, dir) &&
-	    iova_unaligned(iovad, phys, size))
+	    iova_unaligned(iovad, phys, size)) {
+		if (attrs & DMA_ATTR_MMIO)
+			return -EPERM;
+
 		return iommu_dma_iova_link_swiotlb(dev, state, phys, offset,
 				size, dir, attrs);
+	}
 
 	return __dma_iova_link(dev, state->addr + offset - iova_start_pad,
 			phys - iova_start_pad,
-- 
2.50.1
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v4 03/16] dma-debug: refactor to use physical addresses for page mapping
Date: Tue, 19 Aug 2025 20:36:47 +0300
Message-ID: <478d5b7135008b3c82f100faa9d3830839fc6562.1755624249.git.leon@kernel.org>

Convert the DMA debug infrastructure from page-based to physical
address-based mapping, as a preparation for making the DMA mapping
routines rely on physical addresses.

The refactoring renames debug_dma_map_page() to debug_dma_map_phys() and
changes its signature to accept a phys_addr_t parameter instead of
struct page and offset. Similarly, debug_dma_unmap_page() becomes
debug_dma_unmap_phys(). A new dma_debug_phy entry type is introduced to
distinguish physical address mappings from other debug entry types. All
callers throughout the codebase are updated to pass physical addresses
directly, eliminating the page-to-physical conversion in the debug layer
and making the code more efficient and consistent with the DMA mapping
API's focus on physical addresses.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
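For illustration (not part of this patch), a mapping path now converts
page + offset once and hands the physical address to the debug layer;
the wrapper below is hypothetical:

    /* Hypothetical example of the new debug-layer calling convention. */
    static void debug_map_example(struct device *dev, struct page *page,
                                  unsigned long offset, size_t size,
                                  enum dma_data_direction dir,
                                  dma_addr_t dma_addr, unsigned long attrs)
    {
            phys_addr_t phys = page_to_phys(page) + offset;

            /* was: debug_dma_map_page(dev, page, offset, size, dir, ...) */
            debug_dma_map_phys(dev, phys, size, dir, dma_addr, attrs);
    }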
 Documentation/core-api/dma-api.rst |  4 ++--
 kernel/dma/debug.c                 | 28 +++++++++++++++++-----------
 kernel/dma/debug.h                 | 16 +++++++---------
 kernel/dma/mapping.c               | 15 ++++++++-------
 4 files changed, 34 insertions(+), 29 deletions(-)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 3087bea715ed..ca75b3541679 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -761,7 +761,7 @@ example warning message may look like this::
 	[] find_busiest_group+0x207/0x8a0
 	[] _spin_lock_irqsave+0x1f/0x50
 	[] check_unmap+0x203/0x490
-	[] debug_dma_unmap_page+0x49/0x50
+	[] debug_dma_unmap_phys+0x49/0x50
 	[] nv_tx_done_optimized+0xc6/0x2c0
 	[] nv_nic_irq_optimized+0x73/0x2b0
 	[] handle_IRQ_event+0x34/0x70
@@ -855,7 +855,7 @@ that a driver may be leaking mappings.
 dma-debug interface debug_dma_mapping_error() to debug drivers that fail
 to check DMA mapping errors on addresses returned by dma_map_single() and
 dma_map_page() interfaces. This interface clears a flag set by
-debug_dma_map_page() to indicate that dma_mapping_error() has been called by
+debug_dma_map_phys() to indicate that dma_mapping_error() has been called by
 the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
 this flag is still set, prints warning message that includes call trace that
 leads up to the unmap. This interface can be called from dma_mapping_error()
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index e43c6de2bce4..da6734e3a4ce 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -39,6 +39,7 @@ enum {
 	dma_debug_sg,
 	dma_debug_coherent,
 	dma_debug_resource,
+	dma_debug_phy,
 };
 
 enum map_err_types {
@@ -141,6 +142,7 @@ static const char *type2name[] = {
 	[dma_debug_sg] = "scatter-gather",
 	[dma_debug_coherent] = "coherent",
 	[dma_debug_resource] = "resource",
+	[dma_debug_phy] = "phy",
 };
 
 static const char *dir2name[] = {
@@ -1201,9 +1203,8 @@ void debug_dma_map_single(struct device *dev, const void *addr,
 }
 EXPORT_SYMBOL(debug_dma_map_single);
 
-void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
-			size_t size, int direction, dma_addr_t dma_addr,
-			unsigned long attrs)
+void debug_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+			int direction, dma_addr_t dma_addr, unsigned long attrs)
 {
 	struct dma_debug_entry *entry;
 
@@ -1218,19 +1219,24 @@ void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
 		return;
 
 	entry->dev       = dev;
-	entry->type      = dma_debug_single;
-	entry->paddr     = page_to_phys(page) + offset;
+	entry->type      = dma_debug_phy;
+	entry->paddr     = phys;
 	entry->dev_addr  = dma_addr;
 	entry->size      = size;
 	entry->direction = direction;
 	entry->map_err_type = MAP_ERR_NOT_CHECKED;
 
-	check_for_stack(dev, page, offset);
+	if (!(attrs & DMA_ATTR_MMIO)) {
+		struct page *page = phys_to_page(phys);
+		size_t offset = offset_in_page(page);
 
-	if (!PageHighMem(page)) {
-		void *addr = page_address(page) + offset;
+		check_for_stack(dev, page, offset);
 
-		check_for_illegal_area(dev, addr, size);
+		if (!PageHighMem(page)) {
+			void *addr = page_address(page) + offset;
+
+			check_for_illegal_area(dev, addr, size);
+		}
 	}
 
 	add_dma_entry(entry, attrs);
@@ -1274,11 +1280,11 @@ void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 }
 EXPORT_SYMBOL(debug_dma_mapping_error);
 
-void debug_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
+void debug_dma_unmap_phys(struct device *dev, dma_addr_t dma_addr,
 			  size_t size, int direction)
 {
 	struct dma_debug_entry ref = {
-		.type           = dma_debug_single,
+		.type           = dma_debug_phy,
 		.dev            = dev,
 		.dev_addr       = dma_addr,
 		.size           = size,
diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
index f525197d3cae..76adb42bffd5 100644
--- a/kernel/dma/debug.h
+++ b/kernel/dma/debug.h
@@ -9,12 +9,11 @@
 #define _KERNEL_DMA_DEBUG_H
 
 #ifdef CONFIG_DMA_API_DEBUG
-extern void debug_dma_map_page(struct device *dev, struct page *page,
-			       size_t offset, size_t size,
-			       int direction, dma_addr_t dma_addr,
+extern void debug_dma_map_phys(struct device *dev, phys_addr_t phys,
+			       size_t size, int direction, dma_addr_t dma_addr,
 			       unsigned long attrs);
 
-extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
+extern void debug_dma_unmap_phys(struct device *dev, dma_addr_t addr,
 				 size_t size, int direction);
 
 extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
@@ -55,14 +54,13 @@ extern void debug_dma_sync_sg_for_device(struct device *dev,
 					 struct scatterlist *sg,
 					 int nelems, int direction);
 #else /* CONFIG_DMA_API_DEBUG */
-static inline void debug_dma_map_page(struct device *dev, struct page *page,
-				      size_t offset, size_t size,
-				      int direction, dma_addr_t dma_addr,
-				      unsigned long attrs)
+static inline void debug_dma_map_phys(struct device *dev, phys_addr_t phys,
+				      size_t size, int direction,
+				      dma_addr_t dma_addr, unsigned long attrs)
 {
 }
 
-static inline void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
+static inline void debug_dma_unmap_phys(struct device *dev, dma_addr_t addr,
 					size_t size, int direction)
 {
 }
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 107e4a4d251d..4c1dfbabb8ae 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -157,6 +157,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
+	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t addr;
 
 	BUG_ON(!valid_dma_direction(dir));
@@ -165,16 +166,15 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_map_page_direct(dev, page_to_phys(page) + offset + size))
+	    arch_dma_map_page_direct(dev, phys + size))
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_page(dev, page, offset, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
-	trace_dma_map_page(dev, page_to_phys(page) + offset, addr, size, dir,
-			   attrs);
-	debug_dma_map_page(dev, page, offset, size, dir, addr, attrs);
+	trace_dma_map_page(dev, phys, addr, size, dir, attrs);
+	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
 
 	return addr;
 }
@@ -194,7 +194,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_page(dev, addr, size, dir, attrs);
-	debug_dma_unmap_page(dev, addr, size, dir);
+	debug_dma_unmap_phys(dev, addr, size, dir);
 }
 EXPORT_SYMBOL(dma_unmap_page_attrs);
 
@@ -712,7 +712,8 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
 	if (page) {
 		trace_dma_alloc_pages(dev, page_to_virt(page), *dma_handle,
 				      size, dir, gfp, 0);
-		debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0);
+		debug_dma_map_phys(dev, page_to_phys(page), size, dir,
+				   *dma_handle, 0);
 	} else {
 		trace_dma_alloc_pages(dev, NULL, 0, size, dir, gfp, 0);
 	}
@@ -738,7 +739,7 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
 		dma_addr_t dma_handle, enum dma_data_direction dir)
 {
 	trace_dma_free_pages(dev, page_to_virt(page), dma_handle, size, dir, 0);
-	debug_dma_unmap_page(dev, dma_handle, size, dir);
+	debug_dma_unmap_phys(dev, dma_handle, size, dir);
 	__dma_free_pages(dev, size, page, dma_handle, dir);
 }
 EXPORT_SYMBOL_GPL(dma_free_pages);
-- 
2.50.1
Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v4 04/16] dma-mapping: rename trace_dma_*map_page to trace_dma_*map_phys Date: Tue, 19 Aug 2025 20:36:48 +0300 Message-ID: X-Mailer: git-send-email 2.50.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky As a preparation for following map_page -> map_phys API conversion, let's rename trace_dma_*map_page() to be trace_dma_*map_phys(). Signed-off-by: Leon Romanovsky Reviewed-by: Jason Gunthorpe --- include/trace/events/dma.h | 4 ++-- kernel/dma/mapping.c | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h index ee90d6f1dcf3..84416c7d6bfa 100644 --- a/include/trace/events/dma.h +++ b/include/trace/events/dma.h @@ -72,7 +72,7 @@ DEFINE_EVENT(dma_map, name, \ size_t size, enum dma_data_direction dir, unsigned long attrs), \ TP_ARGS(dev, phys_addr, dma_addr, size, dir, attrs)) =20 -DEFINE_MAP_EVENT(dma_map_page); +DEFINE_MAP_EVENT(dma_map_phys); DEFINE_MAP_EVENT(dma_map_resource); =20 DECLARE_EVENT_CLASS(dma_unmap, @@ -110,7 +110,7 @@ DEFINE_EVENT(dma_unmap, name, \ enum dma_data_direction dir, unsigned long attrs), \ TP_ARGS(dev, addr, size, dir, attrs)) =20 -DEFINE_UNMAP_EVENT(dma_unmap_page); +DEFINE_UNMAP_EVENT(dma_unmap_phys); DEFINE_UNMAP_EVENT(dma_unmap_resource); =20 DECLARE_EVENT_CLASS(dma_alloc_class, diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 4c1dfbabb8ae..fe1f0da6dc50 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -173,7 +173,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struc= t page *page, else addr =3D ops->map_page(dev, page, offset, size, dir, attrs); kmsan_handle_dma(page, offset, size, dir); - trace_dma_map_page(dev, phys, addr, size, dir, attrs); + trace_dma_map_phys(dev, phys, addr, size, dir, attrs); debug_dma_map_phys(dev, phys, size, dir, addr, attrs); =20 return addr; @@ -193,7 +193,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_= t addr, size_t size, iommu_dma_unmap_page(dev, addr, size, dir, attrs); else ops->unmap_page(dev, addr, size, dir, attrs); - trace_dma_unmap_page(dev, addr, size, dir, attrs); + trace_dma_unmap_phys(dev, addr, size, dir, attrs); debug_dma_unmap_phys(dev, addr, size, dir); } EXPORT_SYMBOL(dma_unmap_page_attrs); --=20 2.50.1 From nobody Sat Oct 4 06:33:20 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id AAFFB30DEC7; Tue, 19 Aug 2025 17:37:59 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625079; cv=none; b=fiw7AF12MwIpbsAvxQZm2MS4kC/FSpjkbEZPMK+KOEBPnj59RnRnttJhDGZavhWxmVk8K9TTehwxlvoWd8SGcfReQS0Ob5ze8dPjmzuXHLx4bcqVFjxTx99yYdENo1ejcBSrbfUB+toDnJDRCMxjkdntjyMm7YFgBnib+NjMIN0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625079; c=relaxed/simple; bh=0MhcGdVdXvKtqKaQ8KaQcBZJnClPr0/vCjOFn/qCf0g=; 
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v4 05/16] iommu/dma: rename iommu_dma_*map_page to iommu_dma_*map_phys
Date: Tue, 19 Aug 2025 20:36:49 +0300
Message-ID: <66e7cc6854e4e40278b598b38e0c4d49d7fcec91.1755624249.git.leon@kernel.org>

Rename the IOMMU DMA mapping functions to better reflect their actual
calling convention. The functions iommu_dma_map_page() and
iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
iommu_dma_unmap_phys() respectively, as they already operate on physical
addresses rather than page structures. The calling convention changes
from accepting (struct page *page, unsigned long offset) to
(phys_addr_t phys), which eliminates the need for page-to-physical
address conversion within the functions.

This renaming prepares for the broader DMA API conversion from
page-based to physical address-based mapping throughout the kernel. All
callers are updated to pass physical addresses directly, including
dma_map_page_attrs(), the scatterlist mapping functions, and the DMA
page allocation helpers. The change simplifies the code by removing the
page_to_phys() + offset calculation that was previously done inside the
IOMMU functions.

Signed-off-by: Leon Romanovsky
Reviewed-by: Jason Gunthorpe
---
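For illustration (not part of this patch), the call-site conversion
looks like this; the wrapper is hypothetical:

    /* Hypothetical example: page + offset callers convert once. */
    static dma_addr_t iommu_map_example(struct device *dev, struct page *page,
                                        unsigned long offset, size_t size,
                                        enum dma_data_direction dir,
                                        unsigned long attrs)
    {
            phys_addr_t phys = page_to_phys(page) + offset;

            /* was: iommu_dma_map_page(dev, page, offset, size, dir, attrs) */
            return iommu_dma_map_phys(dev, phys, size, dir, attrs);
    }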
 drivers/iommu/dma-iommu.c | 14 ++++++--------
 include/linux/iommu-dma.h |  7 +++----
 kernel/dma/mapping.c      |  4 ++--
 kernel/dma/ops_helpers.c  |  6 +++---
 4 files changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index e1185ba73e23..aea119f32f96 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1195,11 +1195,9 @@ static inline size_t iova_unaligned(struct iova_domain *iovad, phys_addr_t phys,
 	return iova_offset(iovad, phys | size);
 }
 
-dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
 {
-	phys_addr_t phys = page_to_phys(page) + offset;
 	bool coherent = dev_is_dma_coherent(dev);
 	int prot = dma_info_to_prot(dir, coherent, attrs);
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -1227,7 +1225,7 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 	return iova;
 }
 
-void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -1346,7 +1344,7 @@ static void iommu_dma_unmap_sg_swiotlb(struct device *dev, struct scatterlist *s
 	int i;
 
 	for_each_sg(sg, s, nents, i)
-		iommu_dma_unmap_page(dev, sg_dma_address(s),
+		iommu_dma_unmap_phys(dev, sg_dma_address(s),
 				sg_dma_len(s), dir, attrs);
 }
 
@@ -1359,8 +1357,8 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 	sg_dma_mark_swiotlb(sg);
 
 	for_each_sg(sg, s, nents, i) {
-		sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
-				s->offset, s->length, dir, attrs);
+		sg_dma_address(s) = iommu_dma_map_phys(dev, sg_phys(s),
+				s->length, dir, attrs);
 		if (sg_dma_address(s) == DMA_MAPPING_ERROR)
 			goto out_unmap;
 		sg_dma_len(s) = s->length;
diff --git a/include/linux/iommu-dma.h b/include/linux/iommu-dma.h
index 508beaa44c39..485bdffed988 100644
--- a/include/linux/iommu-dma.h
+++ b/include/linux/iommu-dma.h
@@ -21,10 +21,9 @@ static inline bool use_dma_iommu(struct device *dev)
 }
 #endif /* CONFIG_IOMMU_DMA */
 
-dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs);
-void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs);
+void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs);
 int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 		enum dma_data_direction dir, unsigned long attrs);
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index fe1f0da6dc50..58482536db9b 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -169,7 +169,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 	    arch_dma_map_page_direct(dev, phys + size))
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else if (use_dma_iommu(dev))
-		addr = iommu_dma_map_page(dev, page, offset, size, dir, attrs);
+		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
@@ -190,7 +190,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	    arch_dma_unmap_page_direct(dev, addr + size))
 		dma_direct_unmap_page(dev, addr, size, dir, attrs);
 	else if (use_dma_iommu(dev))
-		iommu_dma_unmap_page(dev, addr, size, dir, attrs);
+		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 9afd569eadb9..6f9d604d9d40 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -72,8 +72,8 @@ struct page *dma_common_alloc_pages(struct device *dev, size_t size,
 		return NULL;
 
 	if (use_dma_iommu(dev))
-		*dma_handle = iommu_dma_map_page(dev, page, 0, size, dir,
-						 DMA_ATTR_SKIP_CPU_SYNC);
+		*dma_handle = iommu_dma_map_phys(dev, page_to_phys(page), size,
+						 dir, DMA_ATTR_SKIP_CPU_SYNC);
 	else
 		*dma_handle = ops->map_page(dev, page, 0, size, dir,
 					    DMA_ATTR_SKIP_CPU_SYNC);
@@ -92,7 +92,7 @@ void dma_common_free_pages(struct device *dev, size_t size, struct page *page,
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	if (use_dma_iommu(dev))
-		iommu_dma_unmap_page(dev, dma_handle, size, dir,
+		iommu_dma_unmap_phys(dev, dma_handle, size, dir,
 				     DMA_ATTR_SKIP_CPU_SYNC);
 	else if (ops->unmap_page)
 		ops->unmap_page(dev, dma_handle, size, dir,
 				DMA_ATTR_SKIP_CPU_SYNC);
-- 
2.50.1
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v4 06/16] iommu/dma: extend iommu_dma_*map_phys API to handle MMIO memory
Date: Tue, 19 Aug 2025 20:36:50 +0300
Message-ID: <4f84639baf6d5d0e107fd2001dff91b6538ff9ae.1755624249.git.leon@kernel.org>

Combine the iommu_dma_*map_phys and iommu_dma_*map_resource interfaces
in order to allow a single phys_addr_t flow. In the following patches,
iommu_dma_map_resource() will be removed in favour of the
iommu_dma_map_phys(..., attrs | DMA_ATTR_MMIO) flow.

Signed-off-by: Leon Romanovsky
Reviewed-by: Jason Gunthorpe
---
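For illustration (not part of this patch), a resource-style mapping can
now be expressed through the unified phys flow; the wrapper below is
hypothetical:

    /*
     * Hypothetical example: the equivalent of a map_resource call,
     * expressed via iommu_dma_map_phys() with DMA_ATTR_MMIO.
     */
    static dma_addr_t map_resource_like(struct device *dev,
                                        phys_addr_t bar_phys, size_t size,
                                        enum dma_data_direction dir)
    {
            return iommu_dma_map_phys(dev, bar_phys, size, dir,
                                      DMA_ATTR_MMIO);
    }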
 drivers/iommu/dma-iommu.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index aea119f32f96..6804aaf034a1 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1211,16 +1211,19 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 	 */
 	if (dev_use_swiotlb(dev, size, dir) &&
 	    iova_unaligned(iovad, phys, size)) {
+		if (attrs & DMA_ATTR_MMIO)
+			return DMA_MAPPING_ERROR;
+
 		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
 		if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
 			return DMA_MAPPING_ERROR;
 	}
 
-	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+	if (!coherent && !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
 		arch_sync_dma_for_device(phys, size, dir);
 
 	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR)
+	if (iova == DMA_MAPPING_ERROR && !(attrs & DMA_ATTR_MMIO))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 	return iova;
 }
@@ -1228,10 +1231,14 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	phys_addr_t phys;
 
-	phys = iommu_iova_to_phys(domain, dma_handle);
+	if (attrs & DMA_ATTR_MMIO) {
+		__iommu_dma_unmap(dev, dma_handle, size);
+		return;
+	}
+
+	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
 	if (WARN_ON(!phys))
 		return;
 
-- 
2.50.1

From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v4 07/16] dma-mapping: convert dma_direct_*map_page to be phys_addr_t based
Date: Tue, 19 Aug 2025 20:36:51 +0300
Message-ID: <3faa9c978e243a904ffe01496148c4563dc9274e.1755624249.git.leon@kernel.org>

Convert the DMA direct mapping functions to accept physical addresses
directly instead of page+offset parameters. The functions were already
operating on physical addresses internally, so this change eliminates
the redundant page-to-physical conversion at the API boundary.
The functions dma_direct_map_page() and dma_direct_unmap_page() are
renamed to dma_direct_map_phys() and dma_direct_unmap_phys()
respectively, with their calling convention changed from (struct page
*page, unsigned long offset) to (phys_addr_t phys).

Architecture-specific functions arch_dma_map_page_direct() and
arch_dma_unmap_page_direct() are similarly renamed to
arch_dma_map_phys_direct() and arch_dma_unmap_phys_direct().

The is_pci_p2pdma_page() checks are replaced with DMA_ATTR_MMIO checks
to allow integration with dma_direct_map_resource(), and
dma_direct_map_phys() is extended to support the MMIO path as well.

Signed-off-by: Leon Romanovsky
Reviewed-by: Jason Gunthorpe
---
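For illustration (not part of this patch), the address rule the direct
path now follows under DMA_ATTR_MMIO; the helper is hypothetical:

    /*
     * Hypothetical example: how the direct path derives the bus
     * address. MMIO addresses are already bus addresses, so they
     * bypass phys_to_dma() and swiotlb bouncing entirely.
     */
    static dma_addr_t direct_bus_addr(struct device *dev, phys_addr_t phys,
                                      unsigned long attrs)
    {
            if (attrs & DMA_ATTR_MMIO)
                    return phys;
            return phys_to_dma(dev, phys);
    }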
 arch/powerpc/kernel/dma-iommu.c |  4 +--
 include/linux/dma-map-ops.h     |  8 ++---
 kernel/dma/direct.c             |  6 ++--
 kernel/dma/direct.h             | 52 +++++++++++++++++++++------------
 kernel/dma/mapping.c            |  8 ++---
 5 files changed, 46 insertions(+), 32 deletions(-)

diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
index 4d64a5db50f3..0359ab72cd3b 100644
--- a/arch/powerpc/kernel/dma-iommu.c
+++ b/arch/powerpc/kernel/dma-iommu.c
@@ -14,7 +14,7 @@
 #define can_map_direct(dev, addr) \
 	((dev)->bus_dma_limit >= phys_to_dma((dev), (addr)))
 
-bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr)
+bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr)
 {
 	if (likely(!dev->bus_dma_limit))
 		return false;
@@ -24,7 +24,7 @@ bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr)
 
 #define is_direct_handle(dev, h) ((h) >= (dev)->archdata.dma_offset)
 
-bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle)
+bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle)
 {
 	if (likely(!dev->bus_dma_limit))
 		return false;
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index f48e5fb88bd5..71f5b3025415 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -392,15 +392,15 @@ void *arch_dma_set_uncached(void *addr, size_t size);
 void arch_dma_clear_uncached(void *addr, size_t size);
 
 #ifdef CONFIG_ARCH_HAS_DMA_MAP_DIRECT
-bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr);
-bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle);
+bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr);
+bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle);
 bool arch_dma_map_sg_direct(struct device *dev, struct scatterlist *sg,
 		int nents);
 bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg,
 		int nents);
 #else
-#define arch_dma_map_page_direct(d, a)		(false)
-#define arch_dma_unmap_page_direct(d, a)	(false)
+#define arch_dma_map_phys_direct(d, a)		(false)
+#define arch_dma_unmap_phys_direct(d, a)	(false)
 #define arch_dma_map_sg_direct(d, s, n)		(false)
 #define arch_dma_unmap_sg_direct(d, s, n)	(false)
 #endif
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 24c359d9c879..fa75e3070073 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -453,7 +453,7 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
 		if (sg_dma_is_bus_address(sg))
 			sg_dma_unmark_bus_address(sg);
 		else
-			dma_direct_unmap_page(dev, sg->dma_address,
+			dma_direct_unmap_phys(dev, sg->dma_address,
 					      sg_dma_len(sg), dir, attrs);
 	}
 }
@@ -476,8 +476,8 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 			 */
 			break;
 		case PCI_P2PDMA_MAP_NONE:
-			sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
-					sg->offset, sg->length, dir, attrs);
+			sg->dma_address = dma_direct_map_phys(dev, sg_phys(sg),
+					sg->length, dir, attrs);
 			if (sg->dma_address == DMA_MAPPING_ERROR) {
 				ret = -EIO;
 				goto out_unmap;
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index d2c0b7e632fc..92dbadcd3b2f 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -80,42 +80,56 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_dma_mark_clean(paddr, size);
 }
 
-static inline dma_addr_t dma_direct_map_page(struct device *dev,
-		struct page *page, unsigned long offset, size_t size,
-		enum dma_data_direction dir, unsigned long attrs)
+static inline dma_addr_t dma_direct_map_phys(struct device *dev,
+		phys_addr_t phys, size_t size, enum dma_data_direction dir,
+		unsigned long attrs)
 {
-	phys_addr_t phys = page_to_phys(page) + offset;
-	dma_addr_t dma_addr = phys_to_dma(dev, phys);
+	dma_addr_t dma_addr;
+	bool capable;
 
 	if (is_swiotlb_force_bounce(dev)) {
-		if (is_pci_p2pdma_page(page))
-			return DMA_MAPPING_ERROR;
+		if (attrs & DMA_ATTR_MMIO)
+			goto err_overflow;
+
 		return swiotlb_map(dev, phys, size, dir, attrs);
 	}
 
-	if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
-	    dma_kmalloc_needs_bounce(dev, size, dir)) {
-		if (is_pci_p2pdma_page(page))
-			return DMA_MAPPING_ERROR;
-		if (is_swiotlb_active(dev))
+	if (attrs & DMA_ATTR_MMIO)
+		dma_addr = phys;
+	else
+		dma_addr = phys_to_dma(dev, phys);
+
+	capable = dma_capable(dev, dma_addr, size, !(attrs & DMA_ATTR_MMIO));
+	if (unlikely(!capable) || dma_kmalloc_needs_bounce(dev, size, dir)) {
+		if (is_swiotlb_active(dev) && !(attrs & DMA_ATTR_MMIO))
 			return swiotlb_map(dev, phys, size, dir, attrs);
 
-		dev_WARN_ONCE(dev, 1,
-			     "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
-			     &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
-		return DMA_MAPPING_ERROR;
+		goto err_overflow;
 	}
 
-	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+	if (!dev_is_dma_coherent(dev) &&
+	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
 		arch_sync_dma_for_device(phys, size, dir);
 	return dma_addr;
+
+err_overflow:
+	dev_WARN_ONCE(
+		dev, 1,
+		"DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
+		&dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
+	return DMA_MAPPING_ERROR;
 }
 
-static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
+static inline void dma_direct_unmap_phys(struct device *dev, dma_addr_t addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	phys_addr_t phys = dma_to_phys(dev, addr);
+	phys_addr_t phys;
+
+	if (attrs & DMA_ATTR_MMIO)
+		/* nothing to do: uncached and no swiotlb */
+		return;
 
+	phys = dma_to_phys(dev, addr);
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 58482536db9b..80481a873340 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -166,8 +166,8 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_map_page_direct(dev, phys + size))
-		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
+	    arch_dma_map_phys_direct(dev, phys + size))
+		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else
@@ -187,8 +187,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 
 	BUG_ON(!valid_dma_direction(dir));
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_unmap_page_direct(dev, addr + size))
-		dma_direct_unmap_page(dev, addr, size, dir, attrs);
+	    arch_dma_unmap_phys_direct(dev, addr + size))
+		dma_direct_unmap_phys(dev, addr, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
 	else
-- 
2.50.1
Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v4 08/16] kmsan: convert kmsan_handle_dma to use physical addresses Date: Tue, 19 Aug 2025 20:36:52 +0300 Message-ID: X-Mailer: git-send-email 2.50.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Convert the KMSAN DMA handling function from page-based to physical address-based interface. The refactoring renames kmsan_handle_dma() parameters from accepting (struct page *page, size_t offset, size_t size) to (phys_addr_t phys, size_t size). The existing semantics where callers are expected to provide only kmap memory is continued here. Signed-off-by: Leon Romanovsky Reviewed-by: Jason Gunthorpe --- drivers/virtio/virtio_ring.c | 4 ++-- include/linux/kmsan.h | 9 ++++----- kernel/dma/mapping.c | 3 ++- mm/kmsan/hooks.c | 5 +++-- tools/virtio/linux/kmsan.h | 2 +- 5 files changed, 12 insertions(+), 11 deletions(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index f5062061c408..c147145a6593 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -378,7 +378,7 @@ static int vring_map_one_sg(const struct vring_virtqueu= e *vq, struct scatterlist * is initialized by the hardware. Explicitly check/unpoison it * depending on the direction. */ - kmsan_handle_dma(sg_page(sg), sg->offset, sg->length, direction); + kmsan_handle_dma(sg_phys(sg), sg->length, direction); *addr =3D (dma_addr_t)sg_phys(sg); return 0; } @@ -3157,7 +3157,7 @@ dma_addr_t virtqueue_dma_map_single_attrs(struct virt= queue *_vq, void *ptr, struct vring_virtqueue *vq =3D to_vvq(_vq); =20 if (!vq->use_dma_api) { - kmsan_handle_dma(virt_to_page(ptr), offset_in_page(ptr), size, dir); + kmsan_handle_dma(virt_to_phys(ptr), size, dir); return (dma_addr_t)virt_to_phys(ptr); } =20 diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index 2b1432cc16d5..f2fd221107bb 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -182,8 +182,7 @@ void kmsan_iounmap_page_range(unsigned long start, unsi= gned long end); =20 /** * kmsan_handle_dma() - Handle a DMA data transfer. - * @page: first page of the buffer. - * @offset: offset of the buffer within the first page. + * @phys: physical address of the buffer. * @size: buffer size. * @dir: one of possible dma_data_direction values. * @@ -192,7 +191,7 @@ void kmsan_iounmap_page_range(unsigned long start, unsi= gned long end); * * initializes the buffer, if it is copied from device; * * does both, if this is a DMA_BIDIRECTIONAL transfer. 
*/ -void kmsan_handle_dma(struct page *page, size_t offset, size_t size, +void kmsan_handle_dma(phys_addr_t phys, size_t size, enum dma_data_direction dir); =20 /** @@ -372,8 +371,8 @@ static inline void kmsan_iounmap_page_range(unsigned lo= ng start, { } =20 -static inline void kmsan_handle_dma(struct page *page, size_t offset, - size_t size, enum dma_data_direction dir) +static inline void kmsan_handle_dma(phys_addr_t phys, size_t size, + enum dma_data_direction dir) { } =20 diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 80481a873340..891e1fc3e582 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -172,7 +172,8 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struc= t page *page, addr =3D iommu_dma_map_phys(dev, phys, size, dir, attrs); else addr =3D ops->map_page(dev, page, offset, size, dir, attrs); - kmsan_handle_dma(page, offset, size, dir); + + kmsan_handle_dma(phys, size, dir); trace_dma_map_phys(dev, phys, addr, size, dir, attrs); debug_dma_map_phys(dev, phys, size, dir, addr, attrs); =20 diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 97de3d6194f0..6de5c4820330 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -336,14 +336,15 @@ static void kmsan_handle_dma_page(const void *addr, s= ize_t size, } =20 /* Helper function to handle DMA data transfers. */ -void kmsan_handle_dma(struct page *page, size_t offset, size_t size, +void kmsan_handle_dma(phys_addr_t phys, size_t size, enum dma_data_direction dir) { + struct page *page =3D phys_to_page(phys); u64 page_offset, to_go, addr; =20 if (PageHighMem(page)) return; - addr =3D (u64)page_address(page) + offset; + addr =3D (u64)page_address(page) + offset_in_page(phys); /* * The kernel may occasionally give us adjacent DMA pages not belonging * to the same allocation. 
Process them separately to avoid triggering diff --git a/tools/virtio/linux/kmsan.h b/tools/virtio/linux/kmsan.h index 272b5aa285d5..6cd2e3efd03d 100644 --- a/tools/virtio/linux/kmsan.h +++ b/tools/virtio/linux/kmsan.h @@ -4,7 +4,7 @@ =20 #include =20 -inline void kmsan_handle_dma(struct page *page, size_t offset, size_t size, +inline void kmsan_handle_dma(phys_addr_t phys, size_t size, enum dma_data_direction dir) { } --=20 2.50.1 From nobody Sat Oct 4 06:33:20 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 53F7930E0F2; Tue, 19 Aug 2025 17:38:11 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625091; cv=none; b=YP6bo6gQ4OjjU7ym8xV3GlNwIxSQXnWISifEUc7kIk4li3uSjtm/Cfq60ZHSdCc/g+Oo36OMKDFtszOZFdbWcVGFP0rR0qkZ8MDh7TOsEdLl5cDLz1lrczGaxianch/Bb/H1dLfVtnaNoTOUV2xorQATm8gzkxG76Nkk6i9IcBk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625091; c=relaxed/simple; bh=6kQRc8M4kSTN6oNlv6UupesxmLI8KrU+a1TL4nhd0iE=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=lm3orTS3f3I/AkUOV2ngvsE6vIYnosDLM7XVr5XR3dCgFV3QXZBFvR325/QvTyCY50A6CN6hxD4+XmLj6Y1AF8Th/UTuZkjrULx9M2oMYD8yQZ+vKSRQk2JPILACzmrchof0dEz5UAtiGBj5Pz56P9Vh0+12Puoya+NQSOkxQ9U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=f0VJAtsV; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="f0VJAtsV" Received: by smtp.kernel.org (Postfix) with ESMTPSA id E0661C4CEF4; Tue, 19 Aug 2025 17:38:09 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1755625091; bh=6kQRc8M4kSTN6oNlv6UupesxmLI8KrU+a1TL4nhd0iE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=f0VJAtsVM5u2o6C1nUp18HimKN2EfCmaIhrfHn2QNSc0GqPof7zC6NvZWtW7WxBRR p3RzrCBavtqMekvMpM+FFh/Cujilw/RqORiadfmgV0O0WPFzb3zoh9fG9NtA2SaKoT 4uJuPVNsL+wJyPMLIlFVRwue/XQQMijoKpQiX1gYXGPLJHRcRcKooNhIu/RI+O4xTw brcapoHDidTXn3mbpESixmE7uBDkslSxKOYNfAAHP/Nup/cmkLiUsTvpENQ4pVyy+t J22UXxZ5+SPlhyjgcQ8KrBLrryIlymPi7z68rps63VS+W0pkPza1dnTv8/Usu1mzsw k69P5i9HZUFVA== From: Leon Romanovsky To: Marek Szyprowski Cc: Leon Romanovsky , Jason Gunthorpe , Abdiel Janulgue , Alexander Potapenko , Alex Gaynor , Andrew Morton , Christoph Hellwig , Danilo Krummrich , iommu@lists.linux.dev, Jason Wang , Jens Axboe , Joerg Roedel , Jonathan Corbet , Juergen Gross , kasan-dev@googlegroups.com, Keith Busch , linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan , Masami Hiramatsu , Michael Ellerman , "Michael S. 
Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v4 09/16] dma-mapping: handle MMIO flow in dma_map|unmap_page Date: Tue, 19 Aug 2025 20:36:53 +0300 Message-ID: X-Mailer: git-send-email 2.50.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Extend base DMA page API to handle MMIO flow and follow existing dma_map_resource() implementation to rely on dma_map_direct() only to take DMA direct path. Signed-off-by: Leon Romanovsky Reviewed-by: Jason Gunthorpe --- kernel/dma/mapping.c | 26 +++++++++++++++++++++----- 1 file changed, 21 insertions(+), 5 deletions(-) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 891e1fc3e582..fdabfdaeff1d 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -158,6 +158,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struc= t page *page, { const struct dma_map_ops *ops =3D get_dma_ops(dev); phys_addr_t phys =3D page_to_phys(page) + offset; + bool is_mmio =3D attrs & DMA_ATTR_MMIO; dma_addr_t addr; =20 BUG_ON(!valid_dma_direction(dir)); @@ -166,14 +167,25 @@ dma_addr_t dma_map_page_attrs(struct device *dev, str= uct page *page, return DMA_MAPPING_ERROR; =20 if (dma_map_direct(dev, ops) || - arch_dma_map_phys_direct(dev, phys + size)) + (!is_mmio && arch_dma_map_phys_direct(dev, phys + size))) addr =3D dma_direct_map_phys(dev, phys, size, dir, attrs); else if (use_dma_iommu(dev)) addr =3D iommu_dma_map_phys(dev, phys, size, dir, attrs); - else + else if (is_mmio) { + if (!ops->map_resource) + return DMA_MAPPING_ERROR; + + addr =3D ops->map_resource(dev, phys, size, dir, attrs); + } else { + /* + * The dma_ops API contract for ops->map_page() requires + * kmappable memory, while ops->map_resource() does not. 
+ */ addr =3D ops->map_page(dev, page, offset, size, dir, attrs); + } =20 - kmsan_handle_dma(phys, size, dir); + if (!is_mmio) + kmsan_handle_dma(phys, size, dir); trace_dma_map_phys(dev, phys, addr, size, dir, attrs); debug_dma_map_phys(dev, phys, size, dir, addr, attrs); =20 @@ -185,14 +197,18 @@ void dma_unmap_page_attrs(struct device *dev, dma_add= r_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { const struct dma_map_ops *ops =3D get_dma_ops(dev); + bool is_mmio =3D attrs & DMA_ATTR_MMIO; =20 BUG_ON(!valid_dma_direction(dir)); if (dma_map_direct(dev, ops) || - arch_dma_unmap_phys_direct(dev, addr + size)) + (!is_mmio && arch_dma_unmap_phys_direct(dev, addr + size))) dma_direct_unmap_phys(dev, addr, size, dir, attrs); else if (use_dma_iommu(dev)) iommu_dma_unmap_phys(dev, addr, size, dir, attrs); - else + else if (is_mmio) { + if (ops->unmap_resource) + ops->unmap_resource(dev, addr, size, dir, attrs); + } else ops->unmap_page(dev, addr, size, dir, attrs); trace_dma_unmap_phys(dev, addr, size, dir, attrs); debug_dma_unmap_phys(dev, addr, size, dir); --=20 2.50.1 From nobody Sat Oct 4 06:33:20 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id D973E341ACD; Tue, 19 Aug 2025 17:38:29 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625110; cv=none; b=t4e0znte+JbAmvsag7UroBMO5l6RBJnr6gTie9vc+uxIHChEtTrwOxNSIwC6/DZNEmLy2Z/F60PjdV3witOqCSsiZsUEJddNHAlMa5tFeuKrIdlPnYhIcsu4MnWFcAH7B1by8TjlhuK6cKQV/W1U+RSJSjnlQlRFNHvlp+GMI+o= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625110; c=relaxed/simple; bh=0mJoy+46tPnckyRiCMrZ6rM11HgiRAqoXhfeDURewQs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=rVofnsLLmtILznKuDpiB8UpmR2HlngWmCLBoPFtYHoJhSeXdvsebmSmmyMs8nUX6Q+qpAwUJMxIiJntvecL+6N3GTZuEuUBK9B48hfX0astTKbGDoQoyJIQx8k5RwFWoeDntSNCjS1bI4nUcsn2XD2PqoAw/1UodtnJ1kaylvQU= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=mqPmNYet; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="mqPmNYet" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 55B1EC4CEF4; Tue, 19 Aug 2025 17:38:28 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1755625109; bh=0mJoy+46tPnckyRiCMrZ6rM11HgiRAqoXhfeDURewQs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=mqPmNYet1sH32wQnvqz+VvZ80hlj8hmAn5LZc210nMHXG8bXYuHHo6lQnpRQ7wdDm AVkPPjVfSjLNW0DehbYvlIM2mXfpwOQAjPxs6e3RlPIo73ds9O4X7lgEElc8ORAi/f C/x1WmcQqUdWnZImWQk08xqztT32jxKdhvXMOS9p8adbr0qesEwjEYXcIE7Gr89B9V D5kxqWCfk0RMLfyN34FLwGfWViKqrxfYSCkMcFA4I+yentRF7/d5qM2RviS+FzeTgP DNhrTd61vuyUNS97sahLp3RYlTbC6p901YEbBgWhZmWbk3sGcF6U+DBlT0qK2dJfvC h1+6GBB7poGFg== From: Leon Romanovsky To: Marek Szyprowski Cc: Leon Romanovsky , Jason Gunthorpe , Abdiel Janulgue , Alexander Potapenko , Alex Gaynor , Andrew Morton , Christoph Hellwig , Danilo Krummrich , iommu@lists.linux.dev, Jason Wang , Jens Axboe , Joerg Roedel , Jonathan Corbet , Juergen Gross , kasan-dev@googlegroups.com, Keith Busch , linux-block@vger.kernel.org, 
linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan , Masami Hiramatsu , Michael Ellerman , "Michael S. Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org
Subject: [PATCH v4 10/16] xen: swiotlb: Open code map_resource callback
Date: Tue, 19 Aug 2025 20:36:54 +0300

From: Leon Romanovsky

The generic dma_direct_map_resource() is going to be removed in the next patch, so open-code it in the Xen swiotlb driver.

Reviewed-by: Juergen Gross
Signed-off-by: Leon Romanovsky
Reviewed-by: Jason Gunthorpe
---
 drivers/xen/swiotlb-xen.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index da1a7d3d377c..dd7747a2de87 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -392,6 +392,25 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, str= uct scatterlist *sgl, } } =20 +static dma_addr_t xen_swiotlb_direct_map_resource(struct device *dev, + phys_addr_t paddr, + size_t size, + enum dma_data_direction dir, + unsigned long attrs) +{ + dma_addr_t dma_addr =3D paddr; + + if (unlikely(!dma_capable(dev, dma_addr, size, false))) { + dev_err_once(dev, + "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n", + &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit); + WARN_ON_ONCE(1); + return DMA_MAPPING_ERROR; + } + + return dma_addr; +} + /* * Return whether the given device DMA address mask can be supported * properly.
For example, if your device can only drive the low 24-bits @@ -426,5 +445,5 @@ const struct dma_map_ops xen_swiotlb_dma_ops =3D { .alloc_pages_op =3D dma_common_alloc_pages, .free_pages =3D dma_common_free_pages, .max_mapping_size =3D swiotlb_max_mapping_size, - .map_resource =3D dma_direct_map_resource, + .map_resource =3D xen_swiotlb_direct_map_resource, }; --=20 2.50.1 From nobody Sat Oct 4 06:33:20 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2437530E836; Tue, 19 Aug 2025 17:38:23 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625104; cv=none; b=B6P7ZbIavEZQ7fwgf9Xpyz9oC9jQt+pP/oe878rmSLY3vyJukrggvk7xRqpsO1tXMnFYgZQ0qRZirwwWg3fVqplGafLmO7DUmnRK9VGckk4Va0PQhtqKA9656PibZKknl2pe0HsFFoQDViN2zaDBLR6tZ2v+GIs4eOaYxVSd8KU= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625104; c=relaxed/simple; bh=BkEmu3z7RRGlp2YN9VORvYX5wSDL+7OOgFuObGZzrDs=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=QrNUPF4rP0ral9bnvgyh3l3+98EDGPqjF3vh+IV4puyYqcwMrTo6YlBYUjBw2Q4TBzQ5ubzmThAVKl5jxq9rq01TseeyXWHwznH+vXM3PePr3N/Q8y6aNtj6vCowLd60ls8ySFuf8VQqIptNJ1vUF7JmTL1FHRNtUBIvAF0if4U= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=fqZWHIaP; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="fqZWHIaP" Received: by smtp.kernel.org (Postfix) with ESMTPSA id A9ADAC4CEF4; Tue, 19 Aug 2025 17:38:21 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1755625103; bh=BkEmu3z7RRGlp2YN9VORvYX5wSDL+7OOgFuObGZzrDs=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=fqZWHIaPxmym3ES9wSCC61mCQuVB6YNODqqEDE/lbwZG2XspwKEnjwHVAZ9AcIPa3 t709YFrkcihl72/5zKvoSuzsktD5uf94bGRJiBV3UhJbBK+0QSwOIKstILNXKftRXT nRLHqFn8rTZXE/xnmTbt5z/Rb2zRv0RVRvV4oqgXXmyrPVGfP3CR3zErs72ufFoC8b NPRpmlmae297Qsn6hEAMRoCoDXtqLmSArEFGS8y64h17gMofkpbJRIYZ85y8PajGaZ y9EJQynlRN4iDR+7zPNeKxpydrvbS44TexooV0q/H6Ah2/eyzY8W/Vrq0UZV/AfN+p Fc0TIXOJC71fA== From: Leon Romanovsky To: Marek Szyprowski Cc: Leon Romanovsky , Jason Gunthorpe , Abdiel Janulgue , Alexander Potapenko , Alex Gaynor , Andrew Morton , Christoph Hellwig , Danilo Krummrich , iommu@lists.linux.dev, Jason Wang , Jens Axboe , Joerg Roedel , Jonathan Corbet , Juergen Gross , kasan-dev@googlegroups.com, Keith Busch , linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan , Masami Hiramatsu , Michael Ellerman , "Michael S. 
Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v4 11/16] dma-mapping: export new dma_*map_phys() interface Date: Tue, 19 Aug 2025 20:36:55 +0300 Message-ID: X-Mailer: git-send-email 2.50.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys() that operate directly on physical addresses instead of page+offset parameters. This provides a more efficient interface for drivers that already have physical addresses available. The new functions are implemented as the primary mapping layer, with the existing dma_map_page_attrs()/dma_map_resource() and dma_unmap_page_attrs()/dma_unmap_resource() functions converted to simple wrappers around the phys-based implementations. In case dma_map_page_attrs(), the struct page is converted to physical address with help of page_to_phys() function and dma_map_resource() provides physical address as is together with addition of DMA_ATTR_MMIO attribute. The old page-based API is preserved in mapping.c to ensure that existing code won't be affected by changing EXPORT_SYMBOL to EXPORT_SYMBOL_GPL variant for dma_*map_phys(). Signed-off-by: Leon Romanovsky Reviewed-by: Jason Gunthorpe Reviewed-by: Keith Busch --- drivers/iommu/dma-iommu.c | 14 -------- include/linux/dma-direct.h | 2 -- include/linux/dma-mapping.h | 13 +++++++ include/linux/iommu-dma.h | 4 --- include/trace/events/dma.h | 2 -- kernel/dma/debug.c | 43 ----------------------- kernel/dma/debug.h | 21 ----------- kernel/dma/direct.c | 16 --------- kernel/dma/mapping.c | 69 ++++++++++++++++++++----------------- 9 files changed, 50 insertions(+), 134 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 6804aaf034a1..7944a3af4545 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1556,20 +1556,6 @@ void iommu_dma_unmap_sg(struct device *dev, struct s= catterlist *sg, int nents, __iommu_dma_unmap(dev, start, end - start); } =20 -dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, - size_t size, enum dma_data_direction dir, unsigned long attrs) -{ - return __iommu_dma_map(dev, phys, size, - dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO, - dma_get_mask(dev)); -} - -void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle, - size_t size, enum dma_data_direction dir, unsigned long attrs) -{ - __iommu_dma_unmap(dev, handle, size); -} - static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_ad= dr) { size_t alloc_size =3D PAGE_ALIGN(size); diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h index f3bc0bcd7098..c249912456f9 100644 --- a/include/linux/dma-direct.h +++ b/include/linux/dma-direct.h @@ -149,7 +149,5 @@ void dma_direct_free_pages(struct device *dev, size_t s= ize, struct page *page, dma_addr_t dma_addr, enum dma_data_direction dir); int dma_direct_supported(struct device *dev, u64 mask); -dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr, - size_t size, enum dma_data_direction dir, unsigned long attrs); =20 #endif /* _LINUX_DMA_DIRECT_H */ diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 
4254fd9bdf5d..8248ff9363ee 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -138,6 +138,10 @@ dma_addr_t dma_map_page_attrs(struct device *dev, stru= ct page *page, unsigned long attrs); void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs); +dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size, + enum dma_data_direction dir, unsigned long attrs); +void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size, + enum dma_data_direction dir, unsigned long attrs); unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs); void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, @@ -192,6 +196,15 @@ static inline void dma_unmap_page_attrs(struct device = *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { } +static inline dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ + return DMA_MAPPING_ERROR; +} +static inline void dma_unmap_phys(struct device *dev, dma_addr_t addr, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ +} static inline unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs) diff --git a/include/linux/iommu-dma.h b/include/linux/iommu-dma.h index 485bdffed988..a92b3ff9b934 100644 --- a/include/linux/iommu-dma.h +++ b/include/linux/iommu-dma.h @@ -42,10 +42,6 @@ size_t iommu_dma_opt_mapping_size(void); size_t iommu_dma_max_mapping_size(struct device *dev); void iommu_dma_free(struct device *dev, size_t size, void *cpu_addr, dma_addr_t handle, unsigned long attrs); -dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, - size_t size, enum dma_data_direction dir, unsigned long attrs); -void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle, - size_t size, enum dma_data_direction dir, unsigned long attrs); struct sg_table *iommu_dma_alloc_noncontiguous(struct device *dev, size_t = size, enum dma_data_direction dir, gfp_t gfp, unsigned long attrs); void iommu_dma_free_noncontiguous(struct device *dev, size_t size, diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h index 84416c7d6bfa..5da59fd8121d 100644 --- a/include/trace/events/dma.h +++ b/include/trace/events/dma.h @@ -73,7 +73,6 @@ DEFINE_EVENT(dma_map, name, \ TP_ARGS(dev, phys_addr, dma_addr, size, dir, attrs)) =20 DEFINE_MAP_EVENT(dma_map_phys); -DEFINE_MAP_EVENT(dma_map_resource); =20 DECLARE_EVENT_CLASS(dma_unmap, TP_PROTO(struct device *dev, dma_addr_t addr, size_t size, @@ -111,7 +110,6 @@ DEFINE_EVENT(dma_unmap, name, \ TP_ARGS(dev, addr, size, dir, attrs)) =20 DEFINE_UNMAP_EVENT(dma_unmap_phys); -DEFINE_UNMAP_EVENT(dma_unmap_resource); =20 DECLARE_EVENT_CLASS(dma_alloc_class, TP_PROTO(struct device *dev, void *virt_addr, dma_addr_t dma_addr, diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c index da6734e3a4ce..06e31fd216e3 100644 --- a/kernel/dma/debug.c +++ b/kernel/dma/debug.c @@ -38,7 +38,6 @@ enum { dma_debug_single, dma_debug_sg, dma_debug_coherent, - dma_debug_resource, dma_debug_phy, }; =20 @@ -141,7 +140,6 @@ static const char *type2name[] =3D { [dma_debug_single] =3D "single", [dma_debug_sg] =3D "scatter-gather", [dma_debug_coherent] =3D "coherent", - [dma_debug_resource] =3D "resource", [dma_debug_phy] =3D "phy", }; =20 @@ -1448,47 
+1446,6 @@ void debug_dma_free_coherent(struct device *dev, siz= e_t size, check_unmap(&ref); } =20 -void debug_dma_map_resource(struct device *dev, phys_addr_t addr, size_t s= ize, - int direction, dma_addr_t dma_addr, - unsigned long attrs) -{ - struct dma_debug_entry *entry; - - if (unlikely(dma_debug_disabled())) - return; - - entry =3D dma_entry_alloc(); - if (!entry) - return; - - entry->type =3D dma_debug_resource; - entry->dev =3D dev; - entry->paddr =3D addr; - entry->size =3D size; - entry->dev_addr =3D dma_addr; - entry->direction =3D direction; - entry->map_err_type =3D MAP_ERR_NOT_CHECKED; - - add_dma_entry(entry, attrs); -} - -void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr, - size_t size, int direction) -{ - struct dma_debug_entry ref =3D { - .type =3D dma_debug_resource, - .dev =3D dev, - .dev_addr =3D dma_addr, - .size =3D size, - .direction =3D direction, - }; - - if (unlikely(dma_debug_disabled())) - return; - - check_unmap(&ref); -} - void debug_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_hand= le, size_t size, int direction) { diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h index 76adb42bffd5..424b8f912ade 100644 --- a/kernel/dma/debug.h +++ b/kernel/dma/debug.h @@ -30,14 +30,6 @@ extern void debug_dma_alloc_coherent(struct device *dev,= size_t size, extern void debug_dma_free_coherent(struct device *dev, size_t size, void *virt, dma_addr_t addr); =20 -extern void debug_dma_map_resource(struct device *dev, phys_addr_t addr, - size_t size, int direction, - dma_addr_t dma_addr, - unsigned long attrs); - -extern void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_ad= dr, - size_t size, int direction); - extern void debug_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size, int direction); @@ -88,19 +80,6 @@ static inline void debug_dma_free_coherent(struct device= *dev, size_t size, { } =20 -static inline void debug_dma_map_resource(struct device *dev, phys_addr_t = addr, - size_t size, int direction, - dma_addr_t dma_addr, - unsigned long attrs) -{ -} - -static inline void debug_dma_unmap_resource(struct device *dev, - dma_addr_t dma_addr, size_t size, - int direction) -{ -} - static inline void debug_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size, int direction) diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index fa75e3070073..1062caac47e7 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -502,22 +502,6 @@ int dma_direct_map_sg(struct device *dev, struct scatt= erlist *sgl, int nents, return ret; } =20 -dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr, - size_t size, enum dma_data_direction dir, unsigned long attrs) -{ - dma_addr_t dma_addr =3D paddr; - - if (unlikely(!dma_capable(dev, dma_addr, size, false))) { - dev_err_once(dev, - "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n", - &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit); - WARN_ON_ONCE(1); - return DMA_MAPPING_ERROR; - } - - return dma_addr; -} - int dma_direct_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, dma_addr_t dma_addr, size_t size, unsigned long attrs) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index fdabfdaeff1d..0ca098d2e88d 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -152,12 +152,10 @@ static inline bool dma_map_direct(struct device *dev, return dma_go_direct(dev, *dev->dma_mask, ops); } =20 -dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, - size_t 
offset, size_t size, enum dma_data_direction dir, - unsigned long attrs) +dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size, + enum dma_data_direction dir, unsigned long attrs) { const struct dma_map_ops *ops =3D get_dma_ops(dev); - phys_addr_t phys =3D page_to_phys(page) + offset; bool is_mmio =3D attrs & DMA_ATTR_MMIO; dma_addr_t addr; =20 @@ -177,6 +175,9 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struc= t page *page, =20 addr =3D ops->map_resource(dev, phys, size, dir, attrs); } else { + struct page *page =3D phys_to_page(phys); + size_t offset =3D offset_in_page(phys); + /* * The dma_ops API contract for ops->map_page() requires * kmappable memory, while ops->map_resource() does not. @@ -191,9 +192,26 @@ dma_addr_t dma_map_page_attrs(struct device *dev, stru= ct page *page, =20 return addr; } +EXPORT_SYMBOL_GPL(dma_map_phys); + +dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, + size_t offset, size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + phys_addr_t phys =3D page_to_phys(page) + offset; + + if (unlikely(attrs & DMA_ATTR_MMIO)) + return DMA_MAPPING_ERROR; + + if (IS_ENABLED(CONFIG_DMA_API_DEBUG) && + WARN_ON_ONCE(is_zone_device_page(page))) + return DMA_MAPPING_ERROR; + + return dma_map_phys(dev, phys, size, dir, attrs); +} EXPORT_SYMBOL(dma_map_page_attrs); =20 -void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size, +void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { const struct dma_map_ops *ops =3D get_dma_ops(dev); @@ -213,6 +231,16 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr= _t addr, size_t size, trace_dma_unmap_phys(dev, addr, size, dir, attrs); debug_dma_unmap_phys(dev, addr, size, dir); } +EXPORT_SYMBOL_GPL(dma_unmap_phys); + +void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size, + enum dma_data_direction dir, unsigned long attrs) +{ + if (unlikely(attrs & DMA_ATTR_MMIO)) + return; + + dma_unmap_phys(dev, addr, size, dir, attrs); +} EXPORT_SYMBOL(dma_unmap_page_attrs); =20 static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, @@ -338,41 +366,18 @@ EXPORT_SYMBOL(dma_unmap_sg_attrs); dma_addr_t dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - const struct dma_map_ops *ops =3D get_dma_ops(dev); - dma_addr_t addr =3D DMA_MAPPING_ERROR; - - BUG_ON(!valid_dma_direction(dir)); - - if (WARN_ON_ONCE(!dev->dma_mask)) + if (IS_ENABLED(CONFIG_DMA_API_DEBUG) && + WARN_ON_ONCE(pfn_valid(PHYS_PFN(phys_addr)))) return DMA_MAPPING_ERROR; =20 - if (dma_map_direct(dev, ops)) - addr =3D dma_direct_map_resource(dev, phys_addr, size, dir, attrs); - else if (use_dma_iommu(dev)) - addr =3D iommu_dma_map_resource(dev, phys_addr, size, dir, attrs); - else if (ops->map_resource) - addr =3D ops->map_resource(dev, phys_addr, size, dir, attrs); - - trace_dma_map_resource(dev, phys_addr, addr, size, dir, attrs); - debug_dma_map_resource(dev, phys_addr, size, dir, addr, attrs); - return addr; + return dma_map_phys(dev, phys_addr, size, dir, attrs | DMA_ATTR_MMIO); } EXPORT_SYMBOL(dma_map_resource); =20 void dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - const struct dma_map_ops *ops =3D get_dma_ops(dev); - - BUG_ON(!valid_dma_direction(dir)); - if (dma_map_direct(dev, ops)) - ; /* nothing to do: uncached and no swiotlb */ - else if 
(use_dma_iommu(dev)) - iommu_dma_unmap_resource(dev, addr, size, dir, attrs); - else if (ops->unmap_resource) - ops->unmap_resource(dev, addr, size, dir, attrs); - trace_dma_unmap_resource(dev, addr, size, dir, attrs); - debug_dma_unmap_resource(dev, addr, size, dir); + dma_unmap_phys(dev, addr, size, dir, attrs | DMA_ATTR_MMIO); } EXPORT_SYMBOL(dma_unmap_resource); =20 --=20 2.50.1 From nobody Sat Oct 4 06:33:20 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id B236E30C375; Tue, 19 Aug 2025 17:38:36 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625116; cv=none; b=e2P+3kDTJD3GQhAsrSiD76BUVm2UzUlWBVjU6qVHsa7Gyz/87XSwWkiV2IcCbhEK7jQ2ts2L0UG5uJ6c+GHkX1Jlg5IcQPhDyxFAFONlrqGpjiu5G7AYB5QO9LQYWTFHl9Uc9eiNbg6QBDOTW9OWTmiJUKBZIUZBm+EwxBZkUwI= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625116; c=relaxed/simple; bh=O9XZOHlHvjORXmOt4kTIQrFLHTrONFqw1ae8lUbcYw8=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=GlgjwqtZdUdHzHkgSTrfFAg3B2xufJINJYzVfVMJeIyg/xLDmVvzSqr17O/oYt6gMCWcRtiBZO5RXm14sEo5oQMifY/iB6+JHxidUPV1tnFLUaEzDmzeCQDPUVy0N0e5/U1pktqgyXVLxI2Fd83r8EACWhTcWMcEbXLaizqnM/g= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=ATZH+tt0; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="ATZH+tt0" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9165BC4CEF1; Tue, 19 Aug 2025 17:38:34 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1755625116; bh=O9XZOHlHvjORXmOt4kTIQrFLHTrONFqw1ae8lUbcYw8=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ATZH+tt0E0CIypcrTSNKTLVbz11CKQWPuWVv8Uj9hFXpVu8WhMldVqGJubdEEOKA5 ihrhveF/7PK4ciLaP1743VsR6NY/mkFLhxNufmCXwfsaz906MGNVzEujW1yZCGg7UJ V0f9Op2bM/LDCmIDoTw89nki6nqBzfilToxhqaVR8w0WmvQL8B8EY2oxWuG20cX/6j TL1fryJQsGuMrJZkOQ1NbVtcj71sbARhi2TXWmqIIZ9BTi18mEhP8usOCf32R7gnLD uaTJdR9D9iuPFRWq/Gj2stVOJgLdUiBmlsVIk3TNaKhBqMgGSVJIQ5h9MDirkdjHNx bZTvCl5dJvTcQ== From: Leon Romanovsky To: Marek Szyprowski Cc: Leon Romanovsky , Jason Gunthorpe , Abdiel Janulgue , Alexander Potapenko , Alex Gaynor , Andrew Morton , Christoph Hellwig , Danilo Krummrich , iommu@lists.linux.dev, Jason Wang , Jens Axboe , Joerg Roedel , Jonathan Corbet , Juergen Gross , kasan-dev@googlegroups.com, Keith Busch , linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan , Masami Hiramatsu , Michael Ellerman , "Michael S. 
Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v4 12/16] mm/hmm: migrate to physical address-based DMA mapping API Date: Tue, 19 Aug 2025 20:36:56 +0300 Message-ID: <18165db0ff83f8222bfd05c4807cda206bec02f7.1755624249.git.leon@kernel.org> X-Mailer: git-send-email 2.50.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Convert HMM DMA operations from the legacy page-based API to the new physical address-based dma_map_phys() and dma_unmap_phys() functions. This demonstrates the preferred approach for new code that should use physical addresses directly rather than page+offset parameters. The change replaces dma_map_page() and dma_unmap_page() calls with dma_map_phys() and dma_unmap_phys() respectively, using the physical address that was already available in the code. This eliminates the redundant page-to-physical address conversion and aligns with the DMA subsystem's move toward physical address-centric interfaces. This serves as an example of how new code should be written to leverage the more efficient physical address API, which provides cleaner interfaces for drivers that already have access to physical addresses. Reviewed-by: Jason Gunthorpe Signed-off-by: Leon Romanovsky --- mm/hmm.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index d545e2494994..015ab243f081 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -775,8 +775,8 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct h= mm_dma_map *map, if (WARN_ON_ONCE(dma_need_unmap(dev) && !dma_addrs)) goto error; =20 - dma_addr =3D dma_map_page(dev, page, 0, map->dma_entry_size, - DMA_BIDIRECTIONAL); + dma_addr =3D dma_map_phys(dev, paddr, map->dma_entry_size, + DMA_BIDIRECTIONAL, 0); if (dma_mapping_error(dev, dma_addr)) goto error; =20 @@ -819,8 +819,8 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_d= ma_map *map, size_t idx) dma_iova_unlink(dev, state, idx * map->dma_entry_size, map->dma_entry_size, DMA_BIDIRECTIONAL, attrs); } else if (dma_need_unmap(dev)) - dma_unmap_page(dev, dma_addrs[idx], map->dma_entry_size, - DMA_BIDIRECTIONAL); + dma_unmap_phys(dev, dma_addrs[idx], map->dma_entry_size, + DMA_BIDIRECTIONAL, 0); =20 pfns[idx] &=3D ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | HMM_PFN_P2PDMA_BUS); --=20 2.50.1 From nobody Sat Oct 4 06:33:20 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id DAAE830DD0E; Tue, 19 Aug 2025 17:38:48 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625129; cv=none; b=UUyNoXNMPb+BoJzZ0DjYmD1GfYL3rwpJOULaI3D94hOMgOpdLw9pOCRI4JPSWJkdSwD4WpEZN2a+wGNGbtxzCO1+2Cye+F9HbKjNUOixe5qQFyX4btUtaAfYQq9RJlKIvDKnF8cornouyolJrc4SSOlIIRSbDwRKFhsMHFgdsq0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625129; c=relaxed/simple; bh=5GnXky7X8jRP4E4M761xyU7kxSqR5w3y9HvcAnisW/0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: 
From nobody Sat Oct 4 06:33:20 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v4 13/16] mm/hmm: properly take MMIO path
Date: Tue, 19 Aug 2025 20:36:57 +0300
Message-ID: <4b929da0b2dec4bccf489f35ee06098b437053b2.1755624249.git.leon@kernel.org>

From: Leon Romanovsky

When a peer-to-peer transaction traverses the host bridge, the IOMMU mapping needs the IOMMU_MMIO flag together with a skipped CPU cache sync. The latter was already handled by passing the DMA_ATTR_SKIP_CPU_SYNC flag, but the IOMMU flag was missed because such memory was assumed to be treatable as regular memory. Reuse the newly introduced DMA attribute to properly take the MMIO path.
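In sketch form, the selection the hunks below implement boils down to one attribute decision per entry. Here dev, paddr and size are illustrative locals, and map_type stands for the PCI P2PDMA mapping type already computed for the page; only the host-bridge case is shown:

    unsigned long attrs = 0;
    dma_addr_t dma_addr;

    /* P2P through the host bridge: MMIO, no CPU cache maintenance */
    if (map_type == PCI_P2PDMA_MAP_THRU_HOST_BRIDGE)
            attrs |= DMA_ATTR_MMIO;

    dma_addr = dma_map_phys(dev, paddr, size, DMA_BIDIRECTIONAL, attrs);

The same attrs value must be recomputed and passed at unmap time, which is exactly what the hmm_dma_unmap_pfn() hunk below does.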
Reviewed-by: Jason Gunthorpe Signed-off-by: Leon Romanovsky --- mm/hmm.c | 15 ++++++++------- 1 file changed, 8 insertions(+), 7 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index 015ab243f081..6556c0e074ba 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -746,7 +746,7 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct h= mm_dma_map *map, case PCI_P2PDMA_MAP_NONE: break; case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE: - attrs |=3D DMA_ATTR_SKIP_CPU_SYNC; + attrs |=3D DMA_ATTR_MMIO; pfns[idx] |=3D HMM_PFN_P2PDMA; break; case PCI_P2PDMA_MAP_BUS_ADDR: @@ -776,7 +776,7 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct h= mm_dma_map *map, goto error; =20 dma_addr =3D dma_map_phys(dev, paddr, map->dma_entry_size, - DMA_BIDIRECTIONAL, 0); + DMA_BIDIRECTIONAL, attrs); if (dma_mapping_error(dev, dma_addr)) goto error; =20 @@ -811,16 +811,17 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm= _dma_map *map, size_t idx) if ((pfns[idx] & valid_dma) !=3D valid_dma) return false; =20 + if (pfns[idx] & HMM_PFN_P2PDMA) + attrs |=3D DMA_ATTR_MMIO; + if (pfns[idx] & HMM_PFN_P2PDMA_BUS) ; /* no need to unmap bus address P2P mappings */ - else if (dma_use_iova(state)) { - if (pfns[idx] & HMM_PFN_P2PDMA) - attrs |=3D DMA_ATTR_SKIP_CPU_SYNC; + else if (dma_use_iova(state)) dma_iova_unlink(dev, state, idx * map->dma_entry_size, map->dma_entry_size, DMA_BIDIRECTIONAL, attrs); - } else if (dma_need_unmap(dev)) + else if (dma_need_unmap(dev)) dma_unmap_phys(dev, dma_addrs[idx], map->dma_entry_size, - DMA_BIDIRECTIONAL, 0); + DMA_BIDIRECTIONAL, attrs); =20 pfns[idx] &=3D ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | HMM_PFN_P2PDMA_BUS); --=20 2.50.1 From nobody Sat Oct 4 06:33:20 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6EE6F30DD0E; Tue, 19 Aug 2025 17:38:42 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625122; cv=none; b=SF5JoCM6QzZ5fwt/cKcbqeWnFl+6e39nDRAp73d1ZRlM4MFw3jWCUqNo1R/slq8ZMXTlPFP2WDILHvf2mz75qASMX8npFFJDzehKQaS8LE5aTQ/A2nx+/ThsPcuSo4eCpmiCl7ApbpeN6vRQrXThTc2MUwNILkVFEEkUWVCVOXg= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1755625122; c=relaxed/simple; bh=stLVV7BdINySQbyeqCDmRbXyWeQEecbuIkQ/rhX45FU=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=oWEzRhCtUh+U775B2mK36yNk/yXPNVgTKfMLpoLTlEA9KzO7US5o4E5cIhDW8lV6HevOex7WtbiQtEb2uon2m0WkTMOGcFHMZGtVGqputI8S1kMLMgcfo2z9Y2ZFtiGlsRBUIQ21KEEBO/GyNHVsPUJ1LzjfWnsc1J2iUKOsmQw= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=QF95AAuh; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="QF95AAuh" Received: by smtp.kernel.org (Postfix) with ESMTPSA id E7E8DC113D0; Tue, 19 Aug 2025 17:38:40 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1755625122; bh=stLVV7BdINySQbyeqCDmRbXyWeQEecbuIkQ/rhX45FU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=QF95AAuhc8feQn+SEx82Pkl7/uTlNhnB5cHYHMn4haBNk3mIGB32W95xHVkQetyzQ l5pjdHoqmA32dwMVHHl2ireizeQrHrNOOpU6veVyXb4NV0jh1tYHCTnC8dcn2zvtTY 
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v4 14/16] block-dma: migrate to dma_map_phys instead of map_page
Date: Tue, 19 Aug 2025 20:36:58 +0300
Message-ID: <22b824931bc8ba090979ab902e4c1c2ec8327b65.1755624249.git.leon@kernel.org>

From: Leon Romanovsky

After the introduction of dma_map_phys() there is no need to convert a physical address to a struct page just to map it, so use dma_map_phys() directly.

Signed-off-by: Leon Romanovsky
Reviewed-by: Keith Busch
---
 block/blk-mq-dma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c index ad283017caef..37e2142be4f7 100644 --- a/block/blk-mq-dma.c +++ b/block/blk-mq-dma.c @@ -87,8 +87,8 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, st= ruct phys_vec *vec) static bool blk_dma_map_direct(struct request *req, struct device *dma_dev, struct blk_dma_iter *iter, struct phys_vec *vec) { - iter->addr =3D dma_map_page(dma_dev, phys_to_page(vec->paddr), - offset_in_page(vec->paddr), vec->len, rq_dma_dir(req)); + iter->addr =3D dma_map_phys(dma_dev, vec->paddr, vec->len, + rq_dma_dir(req), 0); if (dma_mapping_error(dma_dev, iter->addr)) { iter->status =3D BLK_STS_RESOURCE; return false; --
2.50.1
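Decoded from the hunk above, the before/after shape of the call is:

    /* before: round-trip the physical address through struct page */
    iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr),
                    offset_in_page(vec->paddr), vec->len, rq_dma_dir(req));

    /* after: map the physical address directly */
    iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
                    rq_dma_dir(req), 0);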
From nobody Sat Oct 4 06:33:20 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v4 15/16] block-dma: properly take MMIO path
Date: Tue, 19 Aug 2025 20:36:59 +0300
Message-ID: <642dbeb7aa94257eaea71ec63c06e3f939270023.1755624249.git.leon@kernel.org>

From: Leon Romanovsky

Make sure that the CPU is not synced and that the IOMMU is configured to take the MMIO path by providing the newly introduced DMA_ATTR_MMIO attribute.
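The pattern, repeated at each mapping site in the diff below, is a one-line translation from the new request flag to the DMA attribute:

    unsigned int attrs = 0;

    if (req->cmd_flags & REQ_MMIO)
            attrs = DMA_ATTR_MMIO;

    iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
                    rq_dma_dir(req), attrs);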
Signed-off-by: Leon Romanovsky Reviewed-by: Keith Busch --- block/blk-mq-dma.c | 13 +++++++++++-- include/linux/blk-mq-dma.h | 6 +++++- include/linux/blk_types.h | 2 ++ 3 files changed, 18 insertions(+), 3 deletions(-) diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c index 37e2142be4f7..d415088ed9fd 100644 --- a/block/blk-mq-dma.c +++ b/block/blk-mq-dma.c @@ -87,8 +87,13 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, s= truct phys_vec *vec) static bool blk_dma_map_direct(struct request *req, struct device *dma_dev, struct blk_dma_iter *iter, struct phys_vec *vec) { + unsigned int attrs =3D 0; + + if (req->cmd_flags & REQ_MMIO) + attrs =3D DMA_ATTR_MMIO; + iter->addr =3D dma_map_phys(dma_dev, vec->paddr, vec->len, - rq_dma_dir(req), 0); + rq_dma_dir(req), attrs); if (dma_mapping_error(dma_dev, iter->addr)) { iter->status =3D BLK_STS_RESOURCE; return false; @@ -103,14 +108,17 @@ static bool blk_rq_dma_map_iova(struct request *req, = struct device *dma_dev, { enum dma_data_direction dir =3D rq_dma_dir(req); unsigned int mapped =3D 0; + unsigned int attrs =3D 0; int error; =20 iter->addr =3D state->addr; iter->len =3D dma_iova_size(state); + if (req->cmd_flags & REQ_MMIO) + attrs =3D DMA_ATTR_MMIO; =20 do { error =3D dma_iova_link(dma_dev, state, vec->paddr, mapped, - vec->len, dir, 0); + vec->len, dir, attrs); if (error) break; mapped +=3D vec->len; @@ -176,6 +184,7 @@ bool blk_rq_dma_map_iter_start(struct request *req, str= uct device *dma_dev, * same as non-P2P transfers below and during unmap. */ req->cmd_flags &=3D ~REQ_P2PDMA; + req->cmd_flags |=3D REQ_MMIO; break; default: iter->status =3D BLK_STS_INVAL; diff --git a/include/linux/blk-mq-dma.h b/include/linux/blk-mq-dma.h index c26a01aeae00..6c55f5e58511 100644 --- a/include/linux/blk-mq-dma.h +++ b/include/linux/blk-mq-dma.h @@ -48,12 +48,16 @@ static inline bool blk_rq_dma_map_coalesce(struct dma_i= ova_state *state) static inline bool blk_rq_dma_unmap(struct request *req, struct device *dm= a_dev, struct dma_iova_state *state, size_t mapped_len) { + unsigned int attrs =3D 0; + if (req->cmd_flags & REQ_P2PDMA) return true; =20 if (dma_use_iova(state)) { + if (req->cmd_flags & REQ_MMIO) + attrs =3D DMA_ATTR_MMIO; dma_iova_destroy(dma_dev, state, mapped_len, rq_dma_dir(req), - 0); + attrs); return true; } =20 diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h index 09b99d52fd36..283058bcb5b1 100644 --- a/include/linux/blk_types.h +++ b/include/linux/blk_types.h @@ -387,6 +387,7 @@ enum req_flag_bits { __REQ_FS_PRIVATE, /* for file system (submitter) use */ __REQ_ATOMIC, /* for atomic write operations */ __REQ_P2PDMA, /* contains P2P DMA pages */ + __REQ_MMIO, /* contains MMIO memory */ /* * Command specific flags, keep last: */ @@ -420,6 +421,7 @@ enum req_flag_bits { #define REQ_FS_PRIVATE (__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE) #define REQ_ATOMIC (__force blk_opf_t)(1ULL << __REQ_ATOMIC) #define REQ_P2PDMA (__force blk_opf_t)(1ULL << __REQ_P2PDMA) +#define REQ_MMIO (__force blk_opf_t)(1ULL << __REQ_MMIO) =20 #define REQ_NOUNMAP (__force blk_opf_t)(1ULL << __REQ_NOUNMAP) =20 --=20 2.50.1 From nobody Sat Oct 4 06:33:20 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 2AB3430F520; Tue, 19 Aug 2025 17:39:01 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none 
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v4 16/16] nvme-pci: unmap MMIO pages with appropriate interface
Date: Tue, 19 Aug 2025 20:37:00 +0300
Message-ID: <545fffb8c364f36102919a5a1d57137731409f3c.1755624249.git.leon@kernel.org>

From: Leon Romanovsky

The block layer maps MMIO memory through the dma_map_phys() interface with the help of the DMA_ATTR_MMIO attribute. That memory must be unmapped with the matching unmap function, which wasn't possible before the new REQ flag was added to the block layer in the previous patch.
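As the diff below shows, each unmap site recovers the attribute from the request flags so that unmap stays symmetric with map. In sketch form, with dma_addr and len standing in for the stored descriptor values:

    unsigned int attrs = 0;

    if (req->cmd_flags & REQ_MMIO)
            attrs = DMA_ATTR_MMIO;

    /* must match the attrs used by dma_map_phys() at map time */
    dma_unmap_phys(dma_dev, dma_addr, len, rq_dma_dir(req), attrs);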
Signed-off-by: Leon Romanovsky Reviewed-by: Keith Busch --- drivers/nvme/host/pci.c | 18 +++++++++++++----- 1 file changed, 13 insertions(+), 5 deletions(-) diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c index 2c6d9506b172..f8ecc0e0f576 100644 --- a/drivers/nvme/host/pci.c +++ b/drivers/nvme/host/pci.c @@ -682,11 +682,15 @@ static void nvme_free_prps(struct request *req) { struct nvme_iod *iod =3D blk_mq_rq_to_pdu(req); struct nvme_queue *nvmeq =3D req->mq_hctx->driver_data; + unsigned int attrs =3D 0; unsigned int i; =20 + if (req->cmd_flags & REQ_MMIO) + attrs =3D DMA_ATTR_MMIO; + for (i =3D 0; i < iod->nr_dma_vecs; i++) - dma_unmap_page(nvmeq->dev->dev, iod->dma_vecs[i].addr, - iod->dma_vecs[i].len, rq_dma_dir(req)); + dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr, + iod->dma_vecs[i].len, rq_dma_dir(req), attrs); mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool); } =20 @@ -699,15 +703,19 @@ static void nvme_free_sgls(struct request *req) unsigned int sqe_dma_len =3D le32_to_cpu(iod->cmd.common.dptr.sgl.length); struct nvme_sgl_desc *sg_list =3D iod->descriptors[0]; enum dma_data_direction dir =3D rq_dma_dir(req); + unsigned int attrs =3D 0; + + if (req->cmd_flags & REQ_MMIO) + attrs =3D DMA_ATTR_MMIO; =20 if (iod->nr_descriptors) { unsigned int nr_entries =3D sqe_dma_len / sizeof(*sg_list), i; =20 for (i =3D 0; i < nr_entries; i++) - dma_unmap_page(dma_dev, le64_to_cpu(sg_list[i].addr), - le32_to_cpu(sg_list[i].length), dir); + dma_unmap_phys(dma_dev, le64_to_cpu(sg_list[i].addr), + le32_to_cpu(sg_list[i].length), dir, attrs); } else { - dma_unmap_page(dma_dev, sqe_dma_addr, sqe_dma_len, dir); + dma_unmap_phys(dma_dev, sqe_dma_addr, sqe_dma_len, dir, attrs); } } =20 --=20 2.50.1
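Taken together, the series makes dma_map_resource() a thin wrapper over the phys-based interface, so mapping a device BAR for peer-to-peer DMA reduces to the sketch below. The function names are illustrative, not part of the series; the caller still checks the result with dma_mapping_error():

    #include <linux/dma-mapping.h>

    /* After this series, an MMIO mapping is dma_map_phys() + DMA_ATTR_MMIO. */
    static dma_addr_t map_bar_for_p2p(struct device *dev, phys_addr_t bar_phys,
                                      size_t size, enum dma_data_direction dir)
    {
            return dma_map_phys(dev, bar_phys, size, dir, DMA_ATTR_MMIO);
    }

    static void unmap_bar_for_p2p(struct device *dev, dma_addr_t addr,
                                  size_t size, enum dma_data_direction dir)
    {
            dma_unmap_phys(dev, addr, size, dir, DMA_ATTR_MMIO);
    }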