From: Leon Romanovsky
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, Abdiel Janulgue, Alexander Potapenko,
 Alex Gaynor, Andrew Morton, Christoph Hellwig, Danilo Krummrich,
 David Hildenbrand, iommu@lists.linux.dev, Jason Wang, Jens Axboe,
 Joerg Roedel, Jonathan Corbet, Juergen Gross, kasan-dev@googlegroups.com,
 Keith Busch, linux-block@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan, Masami Hiramatsu,
 Michael Ellerman, "Michael S. Tsirkin", Miguel Ojeda, Robin Murphy,
 rust-for-linux@vger.kernel.org, Sagi Grimberg, Stefano Stabellini,
 Steven Rostedt, virtualization@lists.linux.dev, Will Deacon,
 xen-devel@lists.xenproject.org
Subject: [PATCH v5 01/16] dma-mapping: introduce new DMA attribute to indicate MMIO memory
Date: Tue, 2 Sep 2025 17:48:38 +0300
Message-ID: <9cce2a2bf181edacb33151388caa47725f780907.1756822782.git.leon@kernel.org>

From: Leon Romanovsky

Introduce the DMA_ATTR_MMIO attribute to mark DMA buffers that reside
in memory-mapped I/O (MMIO) regions, such as device BARs exposed
through the host bridge, which are accessible for peer-to-peer (P2P)
DMA.

This attribute is especially useful for exporting device memory to
other devices for DMA without CPU involvement, and avoids unnecessary
or potentially detrimental CPU cache maintenance calls.

DMA_ATTR_MMIO provides dma_map_resource() functionality without
requiring callers to invoke a special function or to branch when
processing generic containers like bio_vec.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 Documentation/core-api/dma-attributes.rst | 18 ++++++++++++++++++
 include/linux/dma-mapping.h               | 20 ++++++++++++++++++++
 include/trace/events/dma.h                |  3 ++-
 rust/kernel/dma.rs                        |  3 +++
 4 files changed, 43 insertions(+), 1 deletion(-)

diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
index 1887d92e8e92..0bdc2be65e57 100644
--- a/Documentation/core-api/dma-attributes.rst
+++ b/Documentation/core-api/dma-attributes.rst
@@ -130,3 +130,21 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
 subsystem that the buffer is fully accessible at the elevated privilege
 level (and ideally inaccessible or at least read-only at the
 lesser-privileged levels).
+
+DMA_ATTR_MMIO
+-------------
+
+This attribute indicates the physical address is not normal system
+memory. It may not be used with kmap*()/phys_to_virt()/phys_to_page()
+functions, it may not be cacheable, and access using CPU load/store
+instructions may not be allowed.
+
+Usually this will be used to describe MMIO addresses, or other non-cacheable
+register addresses. When DMA mapping this sort of address we call
+the operation Peer to Peer, as one device is DMA'ing to another device.
+For PCI devices the p2pdma APIs must be used to determine if
+DMA_ATTR_MMIO is appropriate.
+
+For architectures that require cache flushing for DMA coherence
+DMA_ATTR_MMIO will not perform any cache flushing. The address
+provided must never be mapped cacheable into the CPU.
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 55c03e5fe8cb..4254fd9bdf5d 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -58,6 +58,26 @@
  */
 #define DMA_ATTR_PRIVILEGED		(1UL << 9)
 
+/*
+ * DMA_ATTR_MMIO - Indicates memory-mapped I/O (MMIO) region for DMA mapping
+ *
+ * This attribute indicates the physical address is not normal system
+ * memory. It may not be used with kmap*()/phys_to_virt()/phys_to_page()
+ * functions, it may not be cacheable, and access using CPU load/store
+ * instructions may not be allowed.
+ *
+ * Usually this will be used to describe MMIO addresses, or other non-cacheable
+ * register addresses. When DMA mapping this sort of address we call
+ * the operation Peer to Peer, as one device is DMA'ing to another device.
+ * For PCI devices the p2pdma APIs must be used to determine if DMA_ATTR_MMIO
+ * is appropriate.
+ *
+ * For architectures that require cache flushing for DMA coherence
+ * DMA_ATTR_MMIO will not perform any cache flushing. The address
+ * provided must never be mapped cacheable into the CPU.
+ */
+#define DMA_ATTR_MMIO		(1UL << 10)
+
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
  * be given to a device to use as a DMA source or target. It is specific to a
diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index d8ddc27b6a7c..ee90d6f1dcf3 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -31,7 +31,8 @@ TRACE_DEFINE_ENUM(DMA_NONE);
 		{ DMA_ATTR_FORCE_CONTIGUOUS, "FORCE_CONTIGUOUS" }, \
 		{ DMA_ATTR_ALLOC_SINGLE_PAGES, "ALLOC_SINGLE_PAGES" }, \
 		{ DMA_ATTR_NO_WARN, "NO_WARN" }, \
-		{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" })
+		{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" }, \
+		{ DMA_ATTR_MMIO, "MMIO" })
 
 DECLARE_EVENT_CLASS(dma_map,
 	TP_PROTO(struct device *dev, phys_addr_t phys_addr, dma_addr_t dma_addr,
diff --git a/rust/kernel/dma.rs b/rust/kernel/dma.rs
index 2bc8ab51ec28..61d9eed7a786 100644
--- a/rust/kernel/dma.rs
+++ b/rust/kernel/dma.rs
@@ -242,6 +242,9 @@ pub mod attrs {
     /// Indicates that the buffer is fully accessible at an elevated privilege level (and
     /// ideally inaccessible or at least read-only at lesser-privileged levels).
    pub const DMA_ATTR_PRIVILEGED: Attrs = Attrs(bindings::DMA_ATTR_PRIVILEGED);
+
+    /// Indicates that the buffer is MMIO memory.
+    pub const DMA_ATTR_MMIO: Attrs = Attrs(bindings::DMA_ATTR_MMIO);
 }
 
 /// An abstraction of the `dma_alloc_coherent` API.
-- 
2.50.1
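
A minimal usage sketch of the attribute introduced in this patch
(illustrative, not part of the series): the helper and its is_mmio input
are hypothetical; real PCI callers would derive that decision from the
p2pdma APIs, as the documentation above requires.

    #include <linux/dma-mapping.h>

    /* Hypothetical helper: choose mapping attributes for a physical range.
     * MMIO ranges have no kernel virtual address and must not receive CPU
     * cache maintenance, so they are tagged with DMA_ATTR_MMIO. */
    static unsigned long pick_dma_attrs(bool is_mmio)
    {
            return is_mmio ? DMA_ATTR_MMIO : 0;
    }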
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 02/16] iommu/dma: implement DMA_ATTR_MMIO for dma_iova_link()
Date: Tue, 2 Sep 2025 17:48:39 +0300
Message-ID: <5a279b1ce492ba8635eb3fa6bb9a22fd77366672.1756822782.git.leon@kernel.org>

From: Leon Romanovsky

This will replace the hacky use of DMA_ATTR_SKIP_CPU_SYNC to avoid
touching possibly non-KVA MMIO memory.

Also correct the caching attribute for the IOMMU: MMIO memory should
not be mapped cacheable inside the IOMMU mapping or it can create
system problems. Set IOMMU_MMIO for DMA_ATTR_MMIO.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 18 ++++++++++++++----
 1 file changed, 14 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index ea2ef53bd4fe..e1185ba73e23 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -724,7 +724,12 @@ static int iommu_dma_init_domain(struct iommu_domain *domain, struct device *dev
 static int dma_info_to_prot(enum dma_data_direction dir, bool coherent,
 		     unsigned long attrs)
 {
-	int prot = coherent ? IOMMU_CACHE : 0;
+	int prot;
+
+	if (attrs & DMA_ATTR_MMIO)
+		prot = IOMMU_MMIO;
+	else
+		prot = coherent ? IOMMU_CACHE : 0;
 
 	if (attrs & DMA_ATTR_PRIVILEGED)
 		prot |= IOMMU_PRIV;
@@ -1838,12 +1843,13 @@ static int __dma_iova_link(struct device *dev, dma_addr_t addr,
 		unsigned long attrs)
 {
 	bool coherent = dev_is_dma_coherent(dev);
+	int prot = dma_info_to_prot(dir, coherent, attrs);
 
-	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+	if (!coherent && !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
 		arch_sync_dma_for_device(phys, size, dir);
 
 	return iommu_map_nosync(iommu_get_dma_domain(dev), addr, phys, size,
-			dma_info_to_prot(dir, coherent, attrs), GFP_ATOMIC);
+			prot, GFP_ATOMIC);
 }
 
 static int iommu_dma_iova_bounce_and_link(struct device *dev, dma_addr_t addr,
@@ -1949,9 +1955,13 @@ int dma_iova_link(struct device *dev, struct dma_iova_state *state,
 		return -EIO;
 
 	if (dev_use_swiotlb(dev, size, dir) &&
-	    iova_unaligned(iovad, phys, size))
+	    iova_unaligned(iovad, phys, size)) {
+		if (attrs & DMA_ATTR_MMIO)
+			return -EPERM;
+
 		return iommu_dma_iova_link_swiotlb(dev, state, phys, offset,
 				size, dir, attrs);
+	}
 
 	return __dma_iova_link(dev, state->addr + offset - iova_start_pad,
 			phys - iova_start_pad,
-- 
2.50.1
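
A caller-side sketch of the behavior this patch defines, assuming an
already-allocated IOVA state and a BAR physical address obtained via the
p2pdma APIs (link_bar and bar_phys are illustrative names, not part of
the patch): the MMIO link skips cache maintenance, gets IOMMU_MMIO
protection, and fails with -EPERM rather than bouncing through swiotlb.

    #include <linux/dma-mapping.h>

    static int link_bar(struct device *dev, struct dma_iova_state *state,
                        phys_addr_t bar_phys, size_t size)
    {
            /* No arch_sync_dma_for_device(), IOMMU prot is IOMMU_MMIO;
             * returns -EPERM if swiotlb bouncing would be required. */
            return dma_iova_link(dev, state, bar_phys, 0, size,
                                 DMA_BIDIRECTIONAL, DMA_ATTR_MMIO);
    }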
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 03/16] dma-debug: refactor to use physical addresses for page mapping
Date: Tue, 2 Sep 2025 17:48:40 +0300

From: Leon Romanovsky

Convert the DMA debug infrastructure from page-based to physical
address-based mapping as a preparation to rely on physical addresses
in the DMA mapping routines.

The refactoring renames debug_dma_map_page() to debug_dma_map_phys()
and changes its signature to accept a phys_addr_t parameter instead of
a struct page and offset. Similarly, debug_dma_unmap_page() becomes
debug_dma_unmap_phys(). A new dma_debug_phy type is introduced to
distinguish physical address mappings from other debug entry types.
All callers throughout the codebase are updated to pass physical
addresses directly, eliminating the page-to-physical conversion in the
debug layer and keeping the code consistent with the DMA mapping API's
physical address focus.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 Documentation/core-api/dma-api.rst |  4 ++--
 include/linux/page-flags.h         |  1 +
 kernel/dma/debug.c                 | 38 +++++++++++++++++-------------
 kernel/dma/debug.h                 | 16 ++++++-------
 kernel/dma/mapping.c               | 15 ++++++------
 5 files changed, 39 insertions(+), 35 deletions(-)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 3087bea715ed..ca75b3541679 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -761,7 +761,7 @@ example warning message may look like this::
 	[] find_busiest_group+0x207/0x8a0
 	[] _spin_lock_irqsave+0x1f/0x50
 	[] check_unmap+0x203/0x490
-	[] debug_dma_unmap_page+0x49/0x50
+	[] debug_dma_unmap_phys+0x49/0x50
 	[] nv_tx_done_optimized+0xc6/0x2c0
 	[] nv_nic_irq_optimized+0x73/0x2b0
 	[] handle_IRQ_event+0x34/0x70
@@ -855,7 +855,7 @@ that a driver may be leaking mappings.
 dma-debug interface debug_dma_mapping_error() to debug drivers that fail
 to check DMA mapping errors on addresses returned by dma_map_single() and
 dma_map_page() interfaces. This interface clears a flag set by
-debug_dma_map_page() to indicate that dma_mapping_error() has been called by
+debug_dma_map_phys() to indicate that dma_mapping_error() has been called by
 the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
 this flag is still set, prints warning message that includes call trace that
 leads up to the unmap. This interface can be called from dma_mapping_error()
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 8d3fa3a91ce4..dfbc4ba86bba 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -614,6 +614,7 @@ FOLIO_FLAG(dropbehind, FOLIO_HEAD_PAGE)
  * available at this point.
  */
 #define PageHighMem(__p) is_highmem_idx(page_zonenum(__p))
+#define PhysHighMem(__p) (PageHighMem(phys_to_page(__p)))
 #define folio_test_highmem(__f)	is_highmem_idx(folio_zonenum(__f))
 #else
 PAGEFLAG_FALSE(HighMem, highmem)
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index e43c6de2bce4..a0b135455119 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -39,6 +39,7 @@ enum {
 	dma_debug_sg,
 	dma_debug_coherent,
 	dma_debug_resource,
+	dma_debug_phy,
 };
 
 enum map_err_types {
@@ -141,6 +142,7 @@ static const char *type2name[] = {
 	[dma_debug_sg] = "scatter-gather",
 	[dma_debug_coherent] = "coherent",
 	[dma_debug_resource] = "resource",
+	[dma_debug_phy] = "phy",
 };
 
 static const char *dir2name[] = {
@@ -1051,17 +1053,16 @@ static void check_unmap(struct dma_debug_entry *ref)
 	dma_entry_free(entry);
 }
 
-static void check_for_stack(struct device *dev,
-			    struct page *page, size_t offset)
+static void check_for_stack(struct device *dev, phys_addr_t phys)
 {
 	void *addr;
 	struct vm_struct *stack_vm_area = task_stack_vm_area(current);
 
 	if (!stack_vm_area) {
 		/* Stack is direct-mapped. */
-		if (PageHighMem(page))
+		if (PhysHighMem(phys))
 			return;
-		addr = page_address(page) + offset;
+		addr = phys_to_virt(phys);
 		if (object_is_on_stack(addr))
 			err_printk(dev, NULL, "device driver maps memory from stack [addr=%p]\n", addr);
 	} else {
@@ -1069,10 +1070,12 @@ static void check_for_stack(struct device *dev,
 		int i;
 
 		for (i = 0; i < stack_vm_area->nr_pages; i++) {
-			if (page != stack_vm_area->pages[i])
+			if (__phys_to_pfn(phys) !=
+			    page_to_pfn(stack_vm_area->pages[i]))
 				continue;
 
-			addr = (u8 *)current->stack + i * PAGE_SIZE + offset;
+			addr = (u8 *)current->stack + i * PAGE_SIZE +
+			       (phys % PAGE_SIZE);
 			err_printk(dev, NULL, "device driver maps memory from stack [probable addr=%p]\n", addr);
 			break;
 		}
@@ -1201,9 +1204,8 @@ void debug_dma_map_single(struct device *dev, const void *addr,
 }
 EXPORT_SYMBOL(debug_dma_map_single);
 
-void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
-			size_t size, int direction, dma_addr_t dma_addr,
-			unsigned long attrs)
+void debug_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		int direction, dma_addr_t dma_addr, unsigned long attrs)
 {
 	struct dma_debug_entry *entry;
 
@@ -1218,19 +1220,21 @@ void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
 		return;
 
 	entry->dev       = dev;
-	entry->type      = dma_debug_single;
-	entry->paddr     = page_to_phys(page) + offset;
+	entry->type      = dma_debug_phy;
+	entry->paddr     = phys;
 	entry->dev_addr  = dma_addr;
 	entry->size      = size;
 	entry->direction = direction;
 	entry->map_err_type = MAP_ERR_NOT_CHECKED;
 
-	check_for_stack(dev, page, offset);
+	if (!(attrs & DMA_ATTR_MMIO)) {
+		struct page *page = phys_to_page(phys);
+		size_t offset = offset_in_page(page);
 
-	if (!PageHighMem(page)) {
-		void *addr = page_address(page) + offset;
+		check_for_stack(dev, phys);
 
-		check_for_illegal_area(dev, addr, size);
+		if (!PhysHighMem(phys))
+			check_for_illegal_area(dev, phys_to_virt(phys), size);
 	}
 
 	add_dma_entry(entry, attrs);
@@ -1274,11 +1278,11 @@ void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 }
 EXPORT_SYMBOL(debug_dma_mapping_error);
 
-void debug_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
+void debug_dma_unmap_phys(struct device *dev, dma_addr_t dma_addr,
 			  size_t size, int direction)
 {
 	struct dma_debug_entry ref = {
-		.type           = dma_debug_single,
+		.type           = dma_debug_phy,
 		.dev            = dev,
 		.dev_addr       = dma_addr,
 		.size           = size,
diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
index f525197d3cae..76adb42bffd5 100644
--- a/kernel/dma/debug.h
+++ b/kernel/dma/debug.h
@@ -9,12 +9,11 @@
 #define _KERNEL_DMA_DEBUG_H
 
 #ifdef CONFIG_DMA_API_DEBUG
-extern void debug_dma_map_page(struct device *dev, struct page *page,
-			       size_t offset, size_t size,
-			       int direction, dma_addr_t dma_addr,
+extern void debug_dma_map_phys(struct device *dev, phys_addr_t phys,
+			       size_t size, int direction, dma_addr_t dma_addr,
 			       unsigned long attrs);
 
-extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
+extern void debug_dma_unmap_phys(struct device *dev, dma_addr_t addr,
 				 size_t size, int direction);
 
 extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
@@ -55,14 +54,13 @@ extern void debug_dma_sync_sg_for_device(struct device *dev,
 					 struct scatterlist *sg,
 					 int nelems, int direction);
 #else /* CONFIG_DMA_API_DEBUG */
-static inline void debug_dma_map_page(struct device *dev, struct page *page,
-				      size_t offset, size_t size,
-				      int direction, dma_addr_t dma_addr,
-				      unsigned long attrs)
+static inline void debug_dma_map_phys(struct device *dev, phys_addr_t phys,
+				      size_t size, int direction,
+				      dma_addr_t dma_addr, unsigned long attrs)
 {
 }
 
-static inline void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
+static inline void debug_dma_unmap_phys(struct device *dev, dma_addr_t addr,
 					size_t size, int direction)
 {
 }
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 107e4a4d251d..4c1dfbabb8ae 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -157,6 +157,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
+	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t addr;
 
 	BUG_ON(!valid_dma_direction(dir));
@@ -165,16 +166,15 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_map_page_direct(dev, page_to_phys(page) + offset + size))
+	    arch_dma_map_page_direct(dev, phys + size))
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_page(dev, page, offset, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
-	trace_dma_map_page(dev, page_to_phys(page) + offset, addr, size, dir,
-			   attrs);
-	debug_dma_map_page(dev, page, offset, size, dir, addr, attrs);
+	trace_dma_map_page(dev, phys, addr, size, dir, attrs);
+	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
 
 	return addr;
 }
@@ -194,7 +194,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_page(dev, addr, size, dir, attrs);
-	debug_dma_unmap_page(dev, addr, size, dir);
+	debug_dma_unmap_phys(dev, addr, size, dir);
 }
 EXPORT_SYMBOL(dma_unmap_page_attrs);
 
@@ -712,7 +712,8 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
 	if (page) {
 		trace_dma_alloc_pages(dev, page_to_virt(page), *dma_handle,
 				      size, dir, gfp, 0);
-		debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0);
+		debug_dma_map_phys(dev, page_to_phys(page), size, dir,
+				   *dma_handle, 0);
 	} else {
 		trace_dma_alloc_pages(dev, NULL, 0, size, dir, gfp, 0);
 	}
@@ -738,7 +739,7 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
 		dma_addr_t dma_handle, enum dma_data_direction dir)
 {
 	trace_dma_free_pages(dev, page_to_virt(page), dma_handle, size, dir, 0);
-	debug_dma_unmap_page(dev, dma_handle, size, dir);
+	debug_dma_unmap_phys(dev, dma_handle, size, dir);
 	__dma_free_pages(dev, size, page, dma_handle, dir);
 }
 EXPORT_SYMBOL_GPL(dma_free_pages);
-- 
2.50.1
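
A compressed sketch of the invariant the new PhysHighMem() helper encodes
(illustration only; inspect_phys is a hypothetical wrapper around the
static debug.c helper shown in the hunk above): a physical address may be
turned into a kernel virtual address only when it is direct-mapped, that
is, neither highmem nor MMIO.

    static void inspect_phys(struct device *dev, phys_addr_t phys, size_t size)
    {
            /* Highmem physical addresses have no permanent kernel mapping,
             * so phys_to_virt() is only legal when PhysHighMem() is false.
             * MMIO addresses never reach here, because debug_dma_map_phys()
             * skips these checks entirely for DMA_ATTR_MMIO. */
            if (!PhysHighMem(phys))
                    check_for_illegal_area(dev, phys_to_virt(phys), size);
    }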
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 04/16] dma-mapping: rename trace_dma_*map_page to trace_dma_*map_phys
Date: Tue, 2 Sep 2025 17:48:41 +0300
Message-ID: <7b4656d5f6392486f28f71ad600a95e6690e2f41.1756822782.git.leon@kernel.org>

From: Leon Romanovsky

In preparation for the following map_page -> map_phys API conversion,
rename trace_dma_*map_page() to trace_dma_*map_phys().

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 include/trace/events/dma.h | 4 ++--
 kernel/dma/mapping.c       | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index ee90d6f1dcf3..84416c7d6bfa 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -72,7 +72,7 @@ DEFINE_EVENT(dma_map, name, \
 		 size_t size, enum dma_data_direction dir, unsigned long attrs), \
 	TP_ARGS(dev, phys_addr, dma_addr, size, dir, attrs))
 
-DEFINE_MAP_EVENT(dma_map_page);
+DEFINE_MAP_EVENT(dma_map_phys);
 DEFINE_MAP_EVENT(dma_map_resource);
 
 DECLARE_EVENT_CLASS(dma_unmap,
@@ -110,7 +110,7 @@ DEFINE_EVENT(dma_unmap, name, \
 		 enum dma_data_direction dir, unsigned long attrs), \
 	TP_ARGS(dev, addr, size, dir, attrs))
 
-DEFINE_UNMAP_EVENT(dma_unmap_page);
+DEFINE_UNMAP_EVENT(dma_unmap_phys);
 DEFINE_UNMAP_EVENT(dma_unmap_resource);
 
 DECLARE_EVENT_CLASS(dma_alloc_class,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 4c1dfbabb8ae..fe1f0da6dc50 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -173,7 +173,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
-	trace_dma_map_page(dev, phys, addr, size, dir, attrs);
+	trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
 	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
 
 	return addr;
@@ -193,7 +193,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 		iommu_dma_unmap_page(dev, addr, size, dir, attrs);
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
-	trace_dma_unmap_page(dev, addr, size, dir, attrs);
+	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
 	debug_dma_unmap_phys(dev, addr, size, dir);
 }
 EXPORT_SYMBOL(dma_unmap_page_attrs);
-- 
2.50.1
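
The rename is mechanical: the event class and arguments are unchanged, so
call sites switch one-for-one, as in this sketch. Tooling that enables
these events by name (for example the dma:dma_map_page tracepoint) would
likewise need the new names.

    /* before this patch */
    trace_dma_map_page(dev, phys, addr, size, dir, attrs);
    trace_dma_unmap_page(dev, addr, size, dir, attrs);

    /* after this patch */
    trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
    trace_dma_unmap_phys(dev, addr, size, dir, attrs);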
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 05/16] iommu/dma: rename iommu_dma_*map_page to iommu_dma_*map_phys
Date: Tue, 2 Sep 2025 17:48:42 +0300
Message-ID: <9b7eebd170d68db9854056e24b94ec1fdad73d6f.1756822782.git.leon@kernel.org>

From: Leon Romanovsky

Rename the IOMMU DMA mapping functions to better reflect their actual
calling convention. The functions iommu_dma_map_page() and
iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
iommu_dma_unmap_phys() respectively, as they already operate on
physical addresses rather than page structures.

The calling convention changes from accepting (struct page *page,
unsigned long offset) to (phys_addr_t phys), which eliminates the need
for page-to-physical address conversion within the functions. This
renaming prepares for the broader DMA API conversion from page-based
to physical address-based mapping throughout the kernel.

All callers are updated to pass physical addresses directly, including
dma_map_page_attrs(), scatterlist mapping functions, and DMA page
allocation helpers. The change simplifies the code by removing the
page_to_phys() + offset calculation that was previously done inside
the IOMMU functions.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 14 ++++++--------
 include/linux/iommu-dma.h |  7 +++----
 kernel/dma/mapping.c      |  4 ++--
 kernel/dma/ops_helpers.c  |  6 +++---
 4 files changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index e1185ba73e23..aea119f32f96 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1195,11 +1195,9 @@ static inline size_t iova_unaligned(struct iova_domain *iovad, phys_addr_t phys,
 	return iova_offset(iovad, phys | size);
 }
 
-dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
 {
-	phys_addr_t phys = page_to_phys(page) + offset;
 	bool coherent = dev_is_dma_coherent(dev);
 	int prot = dma_info_to_prot(dir, coherent, attrs);
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -1227,7 +1225,7 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	return iova;
 }
 
-void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -1346,7 +1344,7 @@ static void iommu_dma_unmap_sg_swiotlb(struct device *dev, struct scatterlist *s
 	int i;
 
 	for_each_sg(sg, s, nents, i)
-		iommu_dma_unmap_page(dev, sg_dma_address(s),
+		iommu_dma_unmap_phys(dev, sg_dma_address(s),
 				sg_dma_len(s), dir, attrs);
 }
 
@@ -1359,8 +1357,8 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 	sg_dma_mark_swiotlb(sg);
 
 	for_each_sg(sg, s, nents, i) {
-		sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
-				s->offset, s->length, dir, attrs);
+		sg_dma_address(s) = iommu_dma_map_phys(dev, sg_phys(s),
+				s->length, dir, attrs);
 		if (sg_dma_address(s) == DMA_MAPPING_ERROR)
 			goto out_unmap;
 		sg_dma_len(s) = s->length;
diff --git a/include/linux/iommu-dma.h b/include/linux/iommu-dma.h
index 508beaa44c39..485bdffed988 100644
--- a/include/linux/iommu-dma.h
+++ b/include/linux/iommu-dma.h
@@ -21,10 +21,9 @@ static inline bool use_dma_iommu(struct device *dev)
 }
 #endif /* CONFIG_IOMMU_DMA */
 
-dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs);
-void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs);
+void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs);
 int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 		enum dma_data_direction dir, unsigned long attrs);
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index fe1f0da6dc50..58482536db9b 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -169,7 +169,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 	    arch_dma_map_page_direct(dev, phys + size))
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else if (use_dma_iommu(dev))
-		addr = iommu_dma_map_page(dev, page, offset, size, dir, attrs);
+		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
@@ -190,7 +190,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	    arch_dma_unmap_page_direct(dev, addr + size))
 		dma_direct_unmap_page(dev, addr, size, dir, attrs);
 	else if (use_dma_iommu(dev))
-		iommu_dma_unmap_page(dev, addr, size, dir, attrs);
+		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 9afd569eadb9..6f9d604d9d40 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -72,8 +72,8 @@ struct page *dma_common_alloc_pages(struct device *dev, size_t size,
 		return NULL;
 
 	if (use_dma_iommu(dev))
-		*dma_handle = iommu_dma_map_page(dev, page, 0, size, dir,
-						 DMA_ATTR_SKIP_CPU_SYNC);
+		*dma_handle = iommu_dma_map_phys(dev, page_to_phys(page), size,
+						 dir, DMA_ATTR_SKIP_CPU_SYNC);
 	else
 		*dma_handle = ops->map_page(dev, page, 0, size, dir,
 					    DMA_ATTR_SKIP_CPU_SYNC);
@@ -92,7 +92,7 @@ void dma_common_free_pages(struct device *dev, size_t size, struct page *page,
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	if (use_dma_iommu(dev))
-		iommu_dma_unmap_page(dev, dma_handle, size, dir,
+		iommu_dma_unmap_phys(dev, dma_handle, size, dir,
 				     DMA_ATTR_SKIP_CPU_SYNC);
 	else if (ops->unmap_page)
 		ops->unmap_page(dev, dma_handle, size, dir,
-- 
2.50.1
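
The calling-convention change at a glance, distilled from the hunks above
(map_one is an illustrative wrapper, not a function from the patch): the
page_to_phys() + offset computation moves from inside the IOMMU layer to
its callers.

    static dma_addr_t map_one(struct device *dev, struct page *page,
                              unsigned long offset, size_t size,
                              enum dma_data_direction dir, unsigned long attrs)
    {
            /* before: iommu_dma_map_page(dev, page, offset, size, dir, attrs); */
            return iommu_dma_map_phys(dev, page_to_phys(page) + offset,
                                      size, dir, attrs);
    }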
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 06/16] iommu/dma: implement DMA_ATTR_MMIO for iommu_dma_(un)map_phys()
Date: Tue, 2 Sep 2025 17:48:43 +0300
Message-ID: <615b270dc8cd285c1b05cf3b9d3a969487049a5f.1756822782.git.leon@kernel.org>

From: Leon Romanovsky

Make iommu_dma_map_phys() and iommu_dma_unmap_phys() respect
DMA_ATTR_MMIO.

DMA_ATTR_MMIO makes the functions behave the same as
iommu_dma_(un)map_resource():
 - No swiotlb is possible
 - No cache flushing is done (ATTR_MMIO should not be cached memory)
 - prot for iommu_map() has IOMMU_MMIO, not IOMMU_CACHE

This is preparation for replacing iommu_dma_map_resource() callers
with iommu_dma_map_phys(DMA_ATTR_MMIO) and removing
iommu_dma_(un)map_resource().

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index aea119f32f96..6804aaf034a1 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1211,16 +1211,19 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 	 */
 	if (dev_use_swiotlb(dev, size, dir) &&
 	    iova_unaligned(iovad, phys, size)) {
+		if (attrs & DMA_ATTR_MMIO)
+			return DMA_MAPPING_ERROR;
+
 		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
 		if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
 			return DMA_MAPPING_ERROR;
 	}
 
-	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+	if (!coherent && !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
 		arch_sync_dma_for_device(phys, size, dir);
 
 	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR)
+	if (iova == DMA_MAPPING_ERROR && !(attrs & DMA_ATTR_MMIO))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 	return iova;
 }
@@ -1228,10 +1231,14 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	phys_addr_t phys;
 
-	phys = iommu_iova_to_phys(domain, dma_handle);
+	if (attrs & DMA_ATTR_MMIO) {
+		__iommu_dma_unmap(dev, dma_handle, size);
+		return;
+	}
+
+	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
 	if (WARN_ON(!phys))
 		return;
 
-- 
2.50.1
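
With this patch, the dma_map_resource()-style flow can be expressed
through the phys interface, roughly as below (a sketch only; bar_phys is
an assumed MMIO address discovered through the p2pdma APIs, and error
handling is minimal):

    static int map_peer_bar(struct device *dev, phys_addr_t bar_phys,
                            size_t size)
    {
            dma_addr_t dma;

            /* swiotlb bouncing is refused for MMIO, so a misaligned
             * mapping fails instead of being bounced. */
            dma = iommu_dma_map_phys(dev, bar_phys, size, DMA_BIDIRECTIONAL,
                                     DMA_ATTR_MMIO);
            if (dma == DMA_MAPPING_ERROR)
                    return -EIO;

            /* ... peer device DMA runs here ... */

            iommu_dma_unmap_phys(dev, dma, size, DMA_BIDIRECTIONAL,
                                 DMA_ATTR_MMIO);
            return 0;
    }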
From nobody Fri Oct 3 10:10:32 2025
From: Leon Romanovsky
To: Marek Szyprowski
Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v5 07/16] dma-mapping: convert dma_direct_*map_page to be phys_addr_t based Date: Tue, 2 Sep 2025 17:48:44 +0300 Message-ID: <6b2f4cb436c98d6342db69e965a5621707b9711f.1756822782.git.leon@kernel.org> X-Mailer: git-send-email 2.50.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @kernel.org) X-ZM-MESSAGEID: 1756824595022124101 Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Convert the DMA direct mapping functions to accept physical addresses directly instead of page+offset parameters. The functions were already operating on physical addresses internally, so this change eliminates the redundant page-to-physical conversion at the API boundary. The functions dma_direct_map_page() and dma_direct_unmap_page() are renamed to dma_direct_map_phys() and dma_direct_unmap_phys() respectively, with their calling convention changed from (struct page *page, unsigned long offset) to (phys_addr_t phys). Architecture-specific functions arch_dma_map_page_direct() and arch_dma_unmap_page_direct() are similarly renamed to arch_dma_map_phys_direct() and arch_dma_unmap_phys_direct(). The is_pci_p2pdma_page() checks are replaced with DMA_ATTR_MMIO checks to allow integration with dma_direct_map_resource and dma_direct_map_phys() is extended to support MMIO path either. Reviewed-by: Jason Gunthorpe Signed-off-by: Leon Romanovsky --- arch/powerpc/kernel/dma-iommu.c | 4 +-- include/linux/dma-map-ops.h | 8 ++--- kernel/dma/direct.c | 6 ++-- kernel/dma/direct.h | 57 +++++++++++++++++++++------------ kernel/dma/mapping.c | 8 ++--- 5 files changed, 49 insertions(+), 34 deletions(-) diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iomm= u.c index 4d64a5db50f3..0359ab72cd3b 100644 --- a/arch/powerpc/kernel/dma-iommu.c +++ b/arch/powerpc/kernel/dma-iommu.c @@ -14,7 +14,7 @@ #define can_map_direct(dev, addr) \ ((dev)->bus_dma_limit >=3D phys_to_dma((dev), (addr))) =20 -bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr) +bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr) { if (likely(!dev->bus_dma_limit)) return false; @@ -24,7 +24,7 @@ bool arch_dma_map_page_direct(struct device *dev, phys_ad= dr_t addr) =20 #define is_direct_handle(dev, h) ((h) >=3D (dev)->archdata.dma_offset) =20 -bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle) +bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle) { if (likely(!dev->bus_dma_limit)) return false; diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index f48e5fb88bd5..71f5b3025415 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -392,15 +392,15 @@ void *arch_dma_set_uncached(void *addr, size_t size); void arch_dma_clear_uncached(void *addr, size_t size); =20 #ifdef CONFIG_ARCH_HAS_DMA_MAP_DIRECT -bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr); -bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle); +bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr); +bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle); bool arch_dma_map_sg_direct(struct device *dev, struct scatterlist *sg, int nents); bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg, 
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 arch/powerpc/kernel/dma-iommu.c |  4 +--
 include/linux/dma-map-ops.h     |  8 ++---
 kernel/dma/direct.c             |  6 ++--
 kernel/dma/direct.h             | 57 +++++++++++++++++++++------------
 kernel/dma/mapping.c            |  8 ++---
 5 files changed, 49 insertions(+), 34 deletions(-)

diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
index 4d64a5db50f3..0359ab72cd3b 100644
--- a/arch/powerpc/kernel/dma-iommu.c
+++ b/arch/powerpc/kernel/dma-iommu.c
@@ -14,7 +14,7 @@
 #define can_map_direct(dev, addr) \
 	((dev)->bus_dma_limit >= phys_to_dma((dev), (addr)))

-bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr)
+bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr)
 {
 	if (likely(!dev->bus_dma_limit))
 		return false;
@@ -24,7 +24,7 @@ bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr)

 #define is_direct_handle(dev, h) ((h) >= (dev)->archdata.dma_offset)

-bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle)
+bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle)
 {
 	if (likely(!dev->bus_dma_limit))
 		return false;
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index f48e5fb88bd5..71f5b3025415 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -392,15 +392,15 @@ void *arch_dma_set_uncached(void *addr, size_t size);
 void arch_dma_clear_uncached(void *addr, size_t size);

 #ifdef CONFIG_ARCH_HAS_DMA_MAP_DIRECT
-bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr);
-bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle);
+bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr);
+bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle);
 bool arch_dma_map_sg_direct(struct device *dev, struct scatterlist *sg,
 		int nents);
 bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg,
 		int nents);
 #else
-#define arch_dma_map_page_direct(d, a)		(false)
-#define arch_dma_unmap_page_direct(d, a)	(false)
+#define arch_dma_map_phys_direct(d, a)		(false)
+#define arch_dma_unmap_phys_direct(d, a)	(false)
 #define arch_dma_map_sg_direct(d, s, n)		(false)
 #define arch_dma_unmap_sg_direct(d, s, n)	(false)
 #endif
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 24c359d9c879..fa75e3070073 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -453,7 +453,7 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
 		if (sg_dma_is_bus_address(sg))
 			sg_dma_unmark_bus_address(sg);
 		else
-			dma_direct_unmap_page(dev, sg->dma_address,
+			dma_direct_unmap_phys(dev, sg->dma_address,
 					      sg_dma_len(sg), dir, attrs);
 	}
 }
@@ -476,8 +476,8 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 			 */
 			break;
 		case PCI_P2PDMA_MAP_NONE:
-			sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
-				sg->offset, sg->length, dir, attrs);
+			sg->dma_address = dma_direct_map_phys(dev, sg_phys(sg),
+				sg->length, dir, attrs);
 			if (sg->dma_address == DMA_MAPPING_ERROR) {
 				ret = -EIO;
 				goto out_unmap;
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index d2c0b7e632fc..3f4792910604 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -80,42 +80,57 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_dma_mark_clean(paddr, size);
 }

-static inline dma_addr_t dma_direct_map_page(struct device *dev,
-		struct page *page, unsigned long offset, size_t size,
-		enum dma_data_direction dir, unsigned long attrs)
+static inline dma_addr_t dma_direct_map_phys(struct device *dev,
+		phys_addr_t phys, size_t size, enum dma_data_direction dir,
+		unsigned long attrs)
 {
-	phys_addr_t phys = page_to_phys(page) + offset;
-	dma_addr_t dma_addr = phys_to_dma(dev, phys);
+	dma_addr_t dma_addr;

 	if (is_swiotlb_force_bounce(dev)) {
-		if (is_pci_p2pdma_page(page))
-			return DMA_MAPPING_ERROR;
+		if (attrs & DMA_ATTR_MMIO)
+			goto err_overflow;
+
 		return swiotlb_map(dev, phys, size, dir, attrs);
 	}

-	if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
-	    dma_kmalloc_needs_bounce(dev, size, dir)) {
-		if (is_pci_p2pdma_page(page))
-			return DMA_MAPPING_ERROR;
-		if (is_swiotlb_active(dev))
-			return swiotlb_map(dev, phys, size, dir, attrs);
-
-		dev_WARN_ONCE(dev, 1,
-			     "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
-			     &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
-		return DMA_MAPPING_ERROR;
+	if (attrs & DMA_ATTR_MMIO) {
+		dma_addr = phys;
+		if (unlikely(!dma_capable(dev, dma_addr, size, false)))
+			goto err_overflow;
+	} else {
+		dma_addr = phys_to_dma(dev, phys);
+		if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
+		    dma_kmalloc_needs_bounce(dev, size, dir)) {
+			if (is_swiotlb_active(dev))
+				return swiotlb_map(dev, phys, size, dir, attrs);
+
+			goto err_overflow;
+		}
 	}

-	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+	if (!dev_is_dma_coherent(dev) &&
+	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
 		arch_sync_dma_for_device(phys, size, dir);
 	return dma_addr;
+
+err_overflow:
+	dev_WARN_ONCE(
+		dev, 1,
+		"DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
+		&dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
+	return DMA_MAPPING_ERROR;
 }

-static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
+static inline void dma_direct_unmap_phys(struct device *dev, dma_addr_t addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	phys_addr_t phys = dma_to_phys(dev, addr);
+	phys_addr_t phys;
+
+	if (attrs & DMA_ATTR_MMIO)
+		/* nothing to do: uncached and no swiotlb */
+		return;

+	phys = dma_to_phys(dev, addr);
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 58482536db9b..80481a873340 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -166,8 +166,8 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;

 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_map_page_direct(dev, phys + size))
-		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
+	    arch_dma_map_phys_direct(dev, phys + size))
+		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else
@@ -187,8 +187,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,

 	BUG_ON(!valid_dma_direction(dir));
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_unmap_page_direct(dev, addr + size))
-		dma_direct_unmap_page(dev, addr, size, dir, attrs);
+	    arch_dma_unmap_phys_direct(dev, addr + size))
+		dma_direct_unmap_phys(dev, addr, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
 	else
-- 
2.50.1

From nobody Fri Oct 3 10:10:32 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 08/16] kmsan: convert kmsan_handle_dma to use physical addresses
Date: Tue, 2 Sep 2025 17:48:45 +0300
Message-ID: <9f59c7c5ca21b39cdc90696f270ec6b04c92abf6.1756822782.git.leon@kernel.org>

From: Leon Romanovsky

Convert the KMSAN DMA handling function from a page-based to a
physical address-based interface. The refactoring changes the
kmsan_handle_dma() parameters from (struct page *page, size_t offset,
size_t size) to (phys_addr_t phys, size_t size). The existing
semantics, where callers are expected to provide only kmap memory, are
preserved.
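
[ Editor's sketch, not part of the patch: the equivalence at a typical
  call site, with "sg" standing for any scatterlist entry. ]

	/* before: page + offset pair */
	kmsan_handle_dma(sg_page(sg), sg->offset, sg->length, direction);

	/* after: the pair collapses into one physical address */
	kmsan_handle_dma(sg_phys(sg), sg->length, direction);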
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/virtio/virtio_ring.c | 4 ++--
 include/linux/kmsan.h        | 9 ++++-----
 kernel/dma/mapping.c         | 3 ++-
 mm/kmsan/hooks.c             | 8 +++++---
 tools/virtio/linux/kmsan.h   | 2 +-
 5 files changed, 14 insertions(+), 12 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index f5062061c408..c147145a6593 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -378,7 +378,7 @@ static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist
 	 * is initialized by the hardware. Explicitly check/unpoison it
 	 * depending on the direction.
 	 */
-	kmsan_handle_dma(sg_page(sg), sg->offset, sg->length, direction);
+	kmsan_handle_dma(sg_phys(sg), sg->length, direction);
 	*addr = (dma_addr_t)sg_phys(sg);
 	return 0;
 }
@@ -3157,7 +3157,7 @@ dma_addr_t virtqueue_dma_map_single_attrs(struct virtqueue *_vq, void *ptr,
 	struct vring_virtqueue *vq = to_vvq(_vq);

 	if (!vq->use_dma_api) {
-		kmsan_handle_dma(virt_to_page(ptr), offset_in_page(ptr), size, dir);
+		kmsan_handle_dma(virt_to_phys(ptr), size, dir);
 		return (dma_addr_t)virt_to_phys(ptr);
 	}

diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
index 2b1432cc16d5..f2fd221107bb 100644
--- a/include/linux/kmsan.h
+++ b/include/linux/kmsan.h
@@ -182,8 +182,7 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end);

 /**
  * kmsan_handle_dma() - Handle a DMA data transfer.
- * @page:   first page of the buffer.
- * @offset: offset of the buffer within the first page.
+ * @phys:   physical address of the buffer.
  * @size:   buffer size.
  * @dir:    one of possible dma_data_direction values.
  *
@@ -192,7 +191,7 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end);
  * * initializes the buffer, if it is copied from device;
  * * does both, if this is a DMA_BIDIRECTIONAL transfer.
  */
-void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+void kmsan_handle_dma(phys_addr_t phys, size_t size,
 		      enum dma_data_direction dir);

 /**
@@ -372,8 +371,8 @@ static inline void kmsan_iounmap_page_range(unsigned long start,
 {
 }

-static inline void kmsan_handle_dma(struct page *page, size_t offset,
-				    size_t size, enum dma_data_direction dir)
+static inline void kmsan_handle_dma(phys_addr_t phys, size_t size,
+				    enum dma_data_direction dir)
 {
 }

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 80481a873340..891e1fc3e582 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -172,7 +172,8 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
-	kmsan_handle_dma(page, offset, size, dir);
+
+	kmsan_handle_dma(phys, size, dir);
 	trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
 	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);

diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 97de3d6194f0..ea6d1de19ede 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -336,14 +336,16 @@ static void kmsan_handle_dma_page(const void *addr, size_t size,
 }

 /* Helper function to handle DMA data transfers. */
-void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+void kmsan_handle_dma(phys_addr_t phys, size_t size,
 		      enum dma_data_direction dir)
 {
-	u64 page_offset, to_go, addr;
+	struct page *page = phys_to_page(phys);
+	u64 page_offset, to_go;
+	void *addr;

 	if (PageHighMem(page))
 		return;
-	addr = (u64)page_address(page) + offset;
+	addr = page_to_virt(page) + offset_in_page(phys);
 	/*
 	 * The kernel may occasionally give us adjacent DMA pages not belonging
 	 * to the same allocation. Process them separately to avoid triggering
diff --git a/tools/virtio/linux/kmsan.h b/tools/virtio/linux/kmsan.h
index 272b5aa285d5..6cd2e3efd03d 100644
--- a/tools/virtio/linux/kmsan.h
+++ b/tools/virtio/linux/kmsan.h
@@ -4,7 +4,7 @@

 #include

-inline void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+inline void kmsan_handle_dma(phys_addr_t phys, size_t size,
 			     enum dma_data_direction dir)
 {
 }
-- 
2.50.1

From nobody Fri Oct 3 10:10:32 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 09/16] dma-mapping: implement DMA_ATTR_MMIO for dma_(un)map_page_attrs()
Date: Tue, 2 Sep 2025 17:48:46 +0300
Message-ID: <098a7aace5780f8ad504ce021e7731dfe1f82dca.1756822782.git.leon@kernel.org>

From: Leon Romanovsky

Make dma_map_page_attrs() and dma_unmap_page_attrs() respect
DMA_ATTR_MMIO. DMA_ATTR_MMIO makes the functions behave the same as
dma_(un)map_resource():

 - No swiotlb is possible
 - Legacy dma_ops arches use ops->map_resource()
 - No kmsan
 - No arch_dma_map_phys_direct()

The prior patches have made the internal functions called here support
DMA_ATTR_MMIO. This is also preparation for turning dma_map_resource()
into an inline calling dma_map_phys(DMA_ATTR_MMIO) to consolidate the
flows.
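
[ Editor's sketch, not part of the patch: the equivalence this patch
  prepares for. "bar_phys" is a hypothetical MMIO physical address, and
  dma_map_phys() itself only lands later in the series. ]

	/* today's resource API ... */
	dma_addr_t a = dma_map_resource(dev, bar_phys, size,
					DMA_BIDIRECTIONAL, 0);

	/* ... is intended to become a thin wrapper around */
	dma_addr_t b = dma_map_phys(dev, bar_phys, size,
				    DMA_BIDIRECTIONAL, DMA_ATTR_MMIO);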
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 kernel/dma/mapping.c | 26 +++++++++++++++++++++-----
 1 file changed, 21 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 891e1fc3e582..fdabfdaeff1d 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -158,6 +158,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 	phys_addr_t phys = page_to_phys(page) + offset;
+	bool is_mmio = attrs & DMA_ATTR_MMIO;
 	dma_addr_t addr;

 	BUG_ON(!valid_dma_direction(dir));
@@ -166,14 +167,25 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;

 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_map_phys_direct(dev, phys + size))
+	    (!is_mmio && arch_dma_map_phys_direct(dev, phys + size)))
 		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
-	else
+	else if (is_mmio) {
+		if (!ops->map_resource)
+			return DMA_MAPPING_ERROR;
+
+		addr = ops->map_resource(dev, phys, size, dir, attrs);
+	} else {
+		/*
+		 * The dma_ops API contract for ops->map_page() requires
+		 * kmappable memory, while ops->map_resource() does not.
+		 */
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
+	}

-	kmsan_handle_dma(phys, size, dir);
+	if (!is_mmio)
+		kmsan_handle_dma(phys, size, dir);
 	trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
 	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);

@@ -185,14 +197,18 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
+	bool is_mmio = attrs & DMA_ATTR_MMIO;

 	BUG_ON(!valid_dma_direction(dir));
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_unmap_phys_direct(dev, addr + size))
+	    (!is_mmio && arch_dma_unmap_phys_direct(dev, addr + size)))
 		dma_direct_unmap_phys(dev, addr, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
-	else
+	else if (is_mmio) {
+		if (ops->unmap_resource)
+			ops->unmap_resource(dev, addr, size, dir, attrs);
+	} else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
 	debug_dma_unmap_phys(dev, addr, size, dir);
-- 
2.50.1

From nobody Fri Oct 3 10:10:32 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 10/16] xen: swiotlb: Open code map_resource callback
Date: Tue, 2 Sep 2025 17:48:47 +0300
Message-ID: <7e3225a24df41b483d60d87450b610b399bc15ca.1756822782.git.leon@kernel.org>

From: Leon Romanovsky

The generic dma_direct_map_resource() is going to be removed in the
next patch, so open-code it in the Xen driver.

Reviewed-by: Juergen Gross
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/xen/swiotlb-xen.c | 21 ++++++++++++++++++++-
 1 file changed, 20 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index da1a7d3d377c..dd7747a2de87 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -392,6 +392,25 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 	}
 }

+static dma_addr_t xen_swiotlb_direct_map_resource(struct device *dev,
+						  phys_addr_t paddr,
+						  size_t size,
+						  enum dma_data_direction dir,
+						  unsigned long attrs)
+{
+	dma_addr_t dma_addr = paddr;
+
+	if (unlikely(!dma_capable(dev, dma_addr, size, false))) {
+		dev_err_once(dev,
+			     "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
+			     &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
+		WARN_ON_ONCE(1);
+		return DMA_MAPPING_ERROR;
+	}
+
+	return dma_addr;
+}
+
 /*
  * Return whether the given device DMA address mask can be supported
 * properly.  For example, if your device can only drive the low 24-bits
@@ -426,5 +445,5 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
 	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 	.max_mapping_size = swiotlb_max_mapping_size,
-	.map_resource = dma_direct_map_resource,
+	.map_resource = xen_swiotlb_direct_map_resource,
 };
-- 
2.50.1

From nobody Fri Oct 3 10:10:32 2025
From: Leon Romanovsky
To: Marek Szyprowski
Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v5 11/16] dma-mapping: export new dma_*map_phys() interface Date: Tue, 2 Sep 2025 17:48:48 +0300 Message-ID: X-Mailer: git-send-email 2.50.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys() that operate directly on physical addresses instead of page+offset parameters. This provides a more efficient interface for drivers that already have physical addresses available. The new functions are implemented as the primary mapping layer, with the existing dma_map_page_attrs()/dma_map_resource() and dma_unmap_page_attrs()/dma_unmap_resource() functions converted to simple wrappers around the phys-based implementations. In case dma_map_page_attrs(), the struct page is converted to physical address with help of page_to_phys() function and dma_map_resource() provides physical address as is together with addition of DMA_ATTR_MMIO attribute. The old page-based API is preserved in mapping.c to ensure that existing code won't be affected by changing EXPORT_SYMBOL to EXPORT_SYMBOL_GPL variant for dma_*map_phys(). Reviewed-by: Jason Gunthorpe Reviewed-by: Keith Busch Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 14 -------- include/linux/dma-direct.h | 2 -- include/linux/dma-mapping.h | 13 +++++++ include/linux/iommu-dma.h | 4 --- include/trace/events/dma.h | 2 -- kernel/dma/debug.c | 43 ----------------------- kernel/dma/debug.h | 21 ----------- kernel/dma/direct.c | 16 --------- kernel/dma/mapping.c | 69 ++++++++++++++++++++----------------- 9 files changed, 50 insertions(+), 134 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 6804aaf034a1..7944a3af4545 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1556,20 +1556,6 @@ void iommu_dma_unmap_sg(struct device *dev, struct s= catterlist *sg, int nents, __iommu_dma_unmap(dev, start, end - start); } =20 -dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, - size_t size, enum dma_data_direction dir, unsigned long attrs) -{ - return __iommu_dma_map(dev, phys, size, - dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO, - dma_get_mask(dev)); -} - -void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle, - size_t size, enum dma_data_direction dir, unsigned long attrs) -{ - __iommu_dma_unmap(dev, handle, size); -} - static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_ad= dr) { size_t alloc_size =3D PAGE_ALIGN(size); diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h index f3bc0bcd7098..c249912456f9 100644 --- a/include/linux/dma-direct.h +++ b/include/linux/dma-direct.h @@ -149,7 +149,5 @@ void dma_direct_free_pages(struct device *dev, size_t s= ize, struct page *page, dma_addr_t dma_addr, enum dma_data_direction dir); int dma_direct_supported(struct device *dev, u64 mask); -dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr, - size_t size, enum dma_data_direction dir, unsigned long attrs); =20 #endif /* _LINUX_DMA_DIRECT_H */ diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 
Reviewed-by: Jason Gunthorpe
Reviewed-by: Keith Busch
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c   | 14 --------
 include/linux/dma-direct.h  |  2 --
 include/linux/dma-mapping.h | 13 +++++++
 include/linux/iommu-dma.h   |  4 ---
 include/trace/events/dma.h  |  2 --
 kernel/dma/debug.c          | 43 -----------------------
 kernel/dma/debug.h          | 21 -----------
 kernel/dma/direct.c         | 16 ---------
 kernel/dma/mapping.c        | 69 ++++++++++++++++++++-----------------
 9 files changed, 50 insertions(+), 134 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 6804aaf034a1..7944a3af4545 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1556,20 +1556,6 @@ void iommu_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
 		__iommu_dma_unmap(dev, start, end - start);
 }

-dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
-{
-	return __iommu_dma_map(dev, phys, size,
-			dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
-			dma_get_mask(dev));
-}
-
-void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
-{
-	__iommu_dma_unmap(dev, handle, size);
-}
-
 static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_addr)
 {
 	size_t alloc_size = PAGE_ALIGN(size);
diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h
index f3bc0bcd7098..c249912456f9 100644
--- a/include/linux/dma-direct.h
+++ b/include/linux/dma-direct.h
@@ -149,7 +149,5 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 		struct page *page, dma_addr_t dma_addr,
 		enum dma_data_direction dir);
 int dma_direct_supported(struct device *dev, u64 mask);
-dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
-		size_t size, enum dma_data_direction dir, unsigned long attrs);

 #endif /* _LINUX_DMA_DIRECT_H */
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 4254fd9bdf5d..8248ff9363ee 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -138,6 +138,10 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		unsigned long attrs);
 void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 		enum dma_data_direction dir, unsigned long attrs);
+dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs);
+void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
+		enum dma_data_direction dir, unsigned long attrs);
 unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, unsigned long attrs);
 void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
@@ -192,6 +196,15 @@ static inline void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 }
+static inline dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+	return DMA_MAPPING_ERROR;
+}
+static inline void dma_unmap_phys(struct device *dev, dma_addr_t addr,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+}
 static inline unsigned int dma_map_sg_attrs(struct device *dev,
 		struct scatterlist *sg, int nents, enum dma_data_direction dir,
 		unsigned long attrs)
diff --git a/include/linux/iommu-dma.h b/include/linux/iommu-dma.h
index 485bdffed988..a92b3ff9b934 100644
--- a/include/linux/iommu-dma.h
+++ b/include/linux/iommu-dma.h
@@ -42,10 +42,6 @@ size_t iommu_dma_opt_mapping_size(void);
 size_t iommu_dma_max_mapping_size(struct device *dev);
 void iommu_dma_free(struct device *dev, size_t size, void *cpu_addr,
 		dma_addr_t handle, unsigned long attrs);
-dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys,
-		size_t size, enum dma_data_direction dir, unsigned long attrs);
-void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir, unsigned long attrs);
 struct sg_table *iommu_dma_alloc_noncontiguous(struct device *dev, size_t size,
 		enum dma_data_direction dir, gfp_t gfp, unsigned long attrs);
 void iommu_dma_free_noncontiguous(struct device *dev, size_t size,
diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index 84416c7d6bfa..5da59fd8121d 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -73,7 +73,6 @@ DEFINE_EVENT(dma_map, name, \
 	TP_ARGS(dev, phys_addr, dma_addr, size, dir, attrs))

 DEFINE_MAP_EVENT(dma_map_phys);
-DEFINE_MAP_EVENT(dma_map_resource);

 DECLARE_EVENT_CLASS(dma_unmap,
 	TP_PROTO(struct device *dev, dma_addr_t addr, size_t size,
@@ -111,7 +110,6 @@ DEFINE_EVENT(dma_unmap, name, \
 	TP_ARGS(dev, addr, size, dir, attrs))

 DEFINE_UNMAP_EVENT(dma_unmap_phys);
-DEFINE_UNMAP_EVENT(dma_unmap_resource);

 DECLARE_EVENT_CLASS(dma_alloc_class,
 	TP_PROTO(struct device *dev, void *virt_addr, dma_addr_t dma_addr,
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index a0b135455119..7f720fe5dc61 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -38,7 +38,6 @@ enum {
 	dma_debug_single,
 	dma_debug_sg,
 	dma_debug_coherent,
-	dma_debug_resource,
 	dma_debug_phy,
 };

@@ -141,7 +140,6 @@ static const char *type2name[] = {
 	[dma_debug_single] = "single",
 	[dma_debug_sg] = "scatter-gather",
 	[dma_debug_coherent] = "coherent",
-	[dma_debug_resource] = "resource",
 	[dma_debug_phy] = "phy",
 };

@@ -1446,47 +1444,6 @@ void debug_dma_free_coherent(struct device *dev, size_t size,
 	check_unmap(&ref);
 }

-void debug_dma_map_resource(struct device *dev, phys_addr_t addr, size_t size,
-			    int direction, dma_addr_t dma_addr,
-			    unsigned long attrs)
-{
-	struct dma_debug_entry *entry;
-
-	if (unlikely(dma_debug_disabled()))
-		return;
-
-	entry = dma_entry_alloc();
-	if (!entry)
-		return;
-
-	entry->type = dma_debug_resource;
-	entry->dev = dev;
-	entry->paddr = addr;
-	entry->size = size;
-	entry->dev_addr = dma_addr;
-	entry->direction = direction;
-	entry->map_err_type = MAP_ERR_NOT_CHECKED;
-
-	add_dma_entry(entry, attrs);
-}
-
-void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr,
-			      size_t size, int direction)
-{
-	struct dma_debug_entry ref = {
-		.type = dma_debug_resource,
-		.dev = dev,
-		.dev_addr = dma_addr,
-		.size = size,
-		.direction = direction,
-	};
-
-	if (unlikely(dma_debug_disabled()))
-		return;
-
-	check_unmap(&ref);
-}
-
 void debug_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
 				   size_t size, int direction)
 {
diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
index 76adb42bffd5..424b8f912ade 100644
--- a/kernel/dma/debug.h
+++ b/kernel/dma/debug.h
@@ -30,14 +30,6 @@ extern void debug_dma_alloc_coherent(struct device *dev, size_t size,
 extern void debug_dma_free_coherent(struct device *dev, size_t size,
 				    void *virt, dma_addr_t addr);

-extern void debug_dma_map_resource(struct device *dev, phys_addr_t addr,
-				   size_t size, int direction,
-				   dma_addr_t dma_addr,
-				   unsigned long attrs);
-
-extern void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr,
-				     size_t size, int direction);
-
 extern void debug_dma_sync_single_for_cpu(struct device *dev,
 					  dma_addr_t dma_handle, size_t size,
 					  int direction);
@@ -88,19 +80,6 @@ static inline void debug_dma_free_coherent(struct device *dev, size_t size,
 {
 }

-static inline void debug_dma_map_resource(struct device *dev, phys_addr_t addr,
-					  size_t size, int direction,
-					  dma_addr_t dma_addr,
-					  unsigned long attrs)
-{
-}
-
-static inline void debug_dma_unmap_resource(struct device *dev,
-					    dma_addr_t dma_addr, size_t size,
-					    int direction)
-{
-}
-
 static inline void debug_dma_sync_single_for_cpu(struct device *dev,
 						 dma_addr_t dma_handle,
 						 size_t size, int direction)
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index fa75e3070073..1062caac47e7 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -502,22 +502,6 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 	return ret;
 }

-dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
-{
-	dma_addr_t dma_addr = paddr;
-
-	if (unlikely(!dma_capable(dev, dma_addr, size, false))) {
-		dev_err_once(dev,
-			     "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
-			     &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
-		WARN_ON_ONCE(1);
-		return DMA_MAPPING_ERROR;
-	}
-
-	return dma_addr;
-}
-
 int dma_direct_get_sgtable(struct device *dev, struct sg_table *sgt,
 		void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		unsigned long attrs)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index fdabfdaeff1d..0ca098d2e88d 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -152,12 +152,10 @@ static inline bool dma_map_direct(struct device *dev,
 	return dma_go_direct(dev, *dev->dma_mask, ops);
 }

-dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
-		size_t offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
-	phys_addr_t phys = page_to_phys(page) + offset;
 	bool is_mmio = attrs & DMA_ATTR_MMIO;
 	dma_addr_t addr;

@@ -177,6 +175,9 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,

 		addr = ops->map_resource(dev, phys, size, dir, attrs);
 	} else {
+		struct page *page = phys_to_page(phys);
+		size_t offset = offset_in_page(phys);
+
 		/*
 		 * The dma_ops API contract for ops->map_page() requires
 		 * kmappable memory, while ops->map_resource() does not.
@@ -191,9 +192,26 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,

 	return addr;
 }
+EXPORT_SYMBOL_GPL(dma_map_phys);
+
+dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
+		size_t offset, size_t size, enum dma_data_direction dir,
+		unsigned long attrs)
+{
+	phys_addr_t phys = page_to_phys(page) + offset;
+
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
+
+	if (IS_ENABLED(CONFIG_DMA_API_DEBUG) &&
+	    WARN_ON_ONCE(is_zone_device_page(page)))
+		return DMA_MAPPING_ERROR;
+
+	return dma_map_phys(dev, phys, size, dir, attrs);
+}
 EXPORT_SYMBOL(dma_map_page_attrs);

-void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
+void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
@@ -213,6 +231,16 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
 	debug_dma_unmap_phys(dev, addr, size, dir);
 }
+EXPORT_SYMBOL_GPL(dma_unmap_phys);
+
+void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
+{
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return;
+
+	dma_unmap_phys(dev, addr, size, dir, attrs);
+}
 EXPORT_SYMBOL(dma_unmap_page_attrs);

 static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
@@ -338,41 +366,18 @@ EXPORT_SYMBOL(dma_unmap_sg_attrs);
 dma_addr_t dma_map_resource(struct device *dev, phys_addr_t phys_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	const struct dma_map_ops *ops = get_dma_ops(dev);
-	dma_addr_t addr = DMA_MAPPING_ERROR;
-
-	BUG_ON(!valid_dma_direction(dir));
-
-	if (WARN_ON_ONCE(!dev->dma_mask))
+	if (IS_ENABLED(CONFIG_DMA_API_DEBUG) &&
+	    WARN_ON_ONCE(pfn_valid(PHYS_PFN(phys_addr))))
 		return DMA_MAPPING_ERROR;

-	if (dma_map_direct(dev, ops))
-		addr = dma_direct_map_resource(dev, phys_addr, size, dir, attrs);
-	else if (use_dma_iommu(dev))
-		addr = iommu_dma_map_resource(dev, phys_addr, size, dir, attrs);
-	else if (ops->map_resource)
-		addr = ops->map_resource(dev, phys_addr, size, dir, attrs);
-
-	trace_dma_map_resource(dev, phys_addr, addr, size, dir, attrs);
-	debug_dma_map_resource(dev, phys_addr, size, dir, addr, attrs);
-	return addr;
+	return dma_map_phys(dev, phys_addr, size, dir, attrs | DMA_ATTR_MMIO);
 }
 EXPORT_SYMBOL(dma_map_resource);

 void dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	const struct dma_map_ops *ops = get_dma_ops(dev);
-
-	BUG_ON(!valid_dma_direction(dir));
-	if (dma_map_direct(dev, ops))
-		; /* nothing to do: uncached and no swiotlb */
-	else if (use_dma_iommu(dev))
-		iommu_dma_unmap_resource(dev, addr, size, dir, attrs);
-	else if (ops->unmap_resource)
-		ops->unmap_resource(dev, addr, size, dir, attrs);
-	trace_dma_unmap_resource(dev, addr, size, dir, attrs);
-	debug_dma_unmap_resource(dev, addr, size, dir);
+	dma_unmap_phys(dev, addr, size, dir, attrs | DMA_ATTR_MMIO);
 }
 EXPORT_SYMBOL(dma_unmap_resource);

-- 
2.50.1

From nobody Fri Oct 3 10:10:32 2025
From: Leon Romanovsky
To: Marek Szyprowski
Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v5 12/16] mm/hmm: migrate to physical address-based DMA mapping API Date: Tue, 2 Sep 2025 17:48:49 +0300 Message-ID: <90d2f14352494d615d3a5d1251126c88f96a4171.1756822782.git.leon@kernel.org> X-Mailer: git-send-email 2.50.1 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Convert HMM DMA operations from the legacy page-based API to the new physical address-based dma_map_phys() and dma_unmap_phys() functions. This demonstrates the preferred approach for new code that should use physical addresses directly rather than page+offset parameters. The change replaces dma_map_page() and dma_unmap_page() calls with dma_map_phys() and dma_unmap_phys() respectively, using the physical address that was already available in the code. This eliminates the redundant page-to-physical address conversion and aligns with the DMA subsystem's move toward physical address-centric interfaces. This serves as an example of how new code should be written to leverage the more efficient physical address API, which provides cleaner interfaces for drivers that already have access to physical addresses. Reviewed-by: Jason Gunthorpe Signed-off-by: Leon Romanovsky --- mm/hmm.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/mm/hmm.c b/mm/hmm.c index d545e2494994..015ab243f081 100644 --- a/mm/hmm.c +++ b/mm/hmm.c @@ -775,8 +775,8 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct h= mm_dma_map *map, if (WARN_ON_ONCE(dma_need_unmap(dev) && !dma_addrs)) goto error; =20 - dma_addr =3D dma_map_page(dev, page, 0, map->dma_entry_size, - DMA_BIDIRECTIONAL); + dma_addr =3D dma_map_phys(dev, paddr, map->dma_entry_size, + DMA_BIDIRECTIONAL, 0); if (dma_mapping_error(dev, dma_addr)) goto error; =20 @@ -819,8 +819,8 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_d= ma_map *map, size_t idx) dma_iova_unlink(dev, state, idx * map->dma_entry_size, map->dma_entry_size, DMA_BIDIRECTIONAL, attrs); } else if (dma_need_unmap(dev)) - dma_unmap_page(dev, dma_addrs[idx], map->dma_entry_size, - DMA_BIDIRECTIONAL); + dma_unmap_phys(dev, dma_addrs[idx], map->dma_entry_size, + DMA_BIDIRECTIONAL, 0); =20 pfns[idx] &=3D ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA | HMM_PFN_P2PDMA_BUS); --=20 2.50.1 From nobody Fri Oct 3 10:10:32 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id A003B320380; Tue, 2 Sep 2025 14:50:14 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1756824614; cv=none; b=KewAgUwecNZtnLsoQ/xIuD2ERv6tEkJ70DW+dCuSw20c+dzU+P9FwJvutROQ/dKPU+rHNG6cVXhZL11fNWPV78yjsrHHlqwJ9OrrlyxQkg2LVYFamZK0RYp5L1uBMQRd/+K0qxxVjtVt9xsLGnwV/2JhG5ZVwz5JjTip6HHhrLM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1756824614; c=relaxed/simple; bh=5GnXky7X8jRP4E4M761xyU7kxSqR5w3y9HvcAnisW/0=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: 
From nobody Fri Oct 3 10:10:32 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 13/16] mm/hmm: properly take MMIO path
Date: Tue, 2 Sep 2025 17:48:50 +0300
Message-ID: <4aac9ae9c0fe39a2e47139fae6d602f71d90bd09.1756822782.git.leon@kernel.org>

From: Leon Romanovsky

When a peer-to-peer transaction traverses the host bridge, the IOMMU
needs the IOMMU_MMIO flag, together with skipping the CPU sync. The
latter was handled by the provided DMA_ATTR_SKIP_CPU_SYNC flag, but
the IOMMU flag was missed, due to the assumption that such memory can
be treated as regular system memory. Reuse the newly introduced DMA
attribute to properly take the MMIO path.
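
[ Editor's sketch, not part of the patch: the mapping decision this
  change encodes; "map_type" is the pci_p2pdma mapping type of the
  page, and the other names are hypothetical locals. ]

	switch (map_type) {
	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
		/* MMIO attribute implies skipping the CPU sync, too */
		attrs |= DMA_ATTR_MMIO;
		break;
	case PCI_P2PDMA_MAP_BUS_ADDR:
		/* bus-address P2P needs no IOMMU mapping at all */
		break;
	default:
		break;
	}
	dma_addr = dma_map_phys(dev, paddr, size, DMA_BIDIRECTIONAL, attrs);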
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 mm/hmm.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 015ab243f081..6556c0e074ba 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -746,7 +746,7 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 	case PCI_P2PDMA_MAP_NONE:
 		break;
 	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
-		attrs |= DMA_ATTR_SKIP_CPU_SYNC;
+		attrs |= DMA_ATTR_MMIO;
 		pfns[idx] |= HMM_PFN_P2PDMA;
 		break;
 	case PCI_P2PDMA_MAP_BUS_ADDR:
@@ -776,7 +776,7 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 		goto error;
 
 	dma_addr = dma_map_phys(dev, paddr, map->dma_entry_size,
-			DMA_BIDIRECTIONAL, 0);
+			DMA_BIDIRECTIONAL, attrs);
 	if (dma_mapping_error(dev, dma_addr))
 		goto error;
 
@@ -811,16 +811,17 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx)
 	if ((pfns[idx] & valid_dma) != valid_dma)
 		return false;
 
+	if (pfns[idx] & HMM_PFN_P2PDMA)
+		attrs |= DMA_ATTR_MMIO;
+
 	if (pfns[idx] & HMM_PFN_P2PDMA_BUS)
 		; /* no need to unmap bus address P2P mappings */
-	else if (dma_use_iova(state)) {
-		if (pfns[idx] & HMM_PFN_P2PDMA)
-			attrs |= DMA_ATTR_SKIP_CPU_SYNC;
+	else if (dma_use_iova(state))
 		dma_iova_unlink(dev, state, idx * map->dma_entry_size,
 				map->dma_entry_size, DMA_BIDIRECTIONAL, attrs);
-	} else if (dma_need_unmap(dev))
+	else if (dma_need_unmap(dev))
 		dma_unmap_phys(dev, dma_addrs[idx], map->dma_entry_size,
-			       DMA_BIDIRECTIONAL, 0);
+			       DMA_BIDIRECTIONAL, attrs);
 
 	pfns[idx] &= ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA |
 		       HMM_PFN_P2PDMA_BUS);
-- 
2.50.1
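The invariant this patch establishes is that DMA_ATTR_MMIO accompanies the mapping through its whole life: set at map/link time for P2P traffic that crosses the host bridge, and passed again at unlink/unmap time. A condensed sketch of the rule (the function and its bool parameter are illustrative, not kernel API):

#include <linux/dma-mapping.h>

/* Illustrative sketch: P2P memory reached through the host bridge is
 * MMIO, so map it with DMA_ATTR_MMIO; this both skips CPU cache
 * maintenance and makes the IOMMU use IOMMU_MMIO for the mapping. */
static dma_addr_t example_p2p_map(struct device *dev, phys_addr_t paddr,
				  size_t size, bool thru_host_bridge)
{
	unsigned int attrs = 0;

	if (thru_host_bridge)
		attrs |= DMA_ATTR_MMIO;

	/* The same attrs value must be used again when unmapping. */
	return dma_map_phys(dev, paddr, size, DMA_BIDIRECTIONAL, attrs);
}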
From nobody Fri Oct 3 10:10:32 2025

From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 14/16] block-dma: migrate to dma_map_phys instead of map_page
Date: Tue, 2 Sep 2025 17:48:51 +0300

After the introduction of dma_map_phys(), there is no need to convert a
physical address to a struct page just to map it. Use the physical
address directly.

Reviewed-by: Keith Busch
Signed-off-by: Leon Romanovsky
---
 block/blk-mq-dma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index ad283017caef..37e2142be4f7 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -87,8 +87,8 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
 static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
 		struct blk_dma_iter *iter, struct phys_vec *vec)
 {
-	iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr),
-		offset_in_page(vec->paddr), vec->len, rq_dma_dir(req));
+	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
+			rq_dma_dir(req), 0);
 	if (dma_mapping_error(dma_dev, iter->addr)) {
 		iter->status = BLK_STS_RESOURCE;
 		return false;
-- 
2.50.1
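The conversion is mechanical because both calls describe the same mapping; a side-by-side sketch of the two forms (the wrapper names are illustrative):

#include <linux/dma-mapping.h>

/* Before: the physical address is split into page + offset, then mapped. */
static dma_addr_t map_direct_old(struct device *dev, phys_addr_t paddr,
				 size_t len, enum dma_data_direction dir)
{
	return dma_map_page(dev, phys_to_page(paddr),
			    offset_in_page(paddr), len, dir);
}

/* After: the physical address is mapped directly; 0 means no attributes. */
static dma_addr_t map_direct_new(struct device *dev, phys_addr_t paddr,
				 size_t len, enum dma_data_direction dir)
{
	return dma_map_phys(dev, paddr, len, dir, 0);
}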
From nobody Fri Oct 3 10:10:32 2025

From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 15/16] block-dma: properly take MMIO path
Date: Tue, 2 Sep 2025 17:48:52 +0300

Make sure the CPU cache is not synced and the IOMMU is configured to
take the MMIO path by providing the newly introduced DMA_ATTR_MMIO
attribute.
Reviewed-by: Keith Busch
Signed-off-by: Leon Romanovsky
---
 block/blk-mq-dma.c         | 13 +++++++++++--
 include/linux/blk-mq-dma.h |  6 +++++-
 include/linux/blk_types.h  |  2 ++
 3 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index 37e2142be4f7..d415088ed9fd 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -87,8 +87,13 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
 static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
 		struct blk_dma_iter *iter, struct phys_vec *vec)
 {
+	unsigned int attrs = 0;
+
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
+
 	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
-			rq_dma_dir(req), 0);
+			rq_dma_dir(req), attrs);
 	if (dma_mapping_error(dma_dev, iter->addr)) {
 		iter->status = BLK_STS_RESOURCE;
 		return false;
@@ -103,14 +108,17 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
 {
 	enum dma_data_direction dir = rq_dma_dir(req);
 	unsigned int mapped = 0;
+	unsigned int attrs = 0;
 	int error;
 
 	iter->addr = state->addr;
 	iter->len = dma_iova_size(state);
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
 
 	do {
 		error = dma_iova_link(dma_dev, state, vec->paddr, mapped,
-				vec->len, dir, 0);
+				vec->len, dir, attrs);
 		if (error)
 			break;
 		mapped += vec->len;
@@ -176,6 +184,7 @@ bool blk_rq_dma_map_iter_start(struct request *req, struct device *dma_dev,
 		 * same as non-P2P transfers below and during unmap.
 		 */
 		req->cmd_flags &= ~REQ_P2PDMA;
+		req->cmd_flags |= REQ_MMIO;
 		break;
 	default:
 		iter->status = BLK_STS_INVAL;
diff --git a/include/linux/blk-mq-dma.h b/include/linux/blk-mq-dma.h
index c26a01aeae00..6c55f5e58511 100644
--- a/include/linux/blk-mq-dma.h
+++ b/include/linux/blk-mq-dma.h
@@ -48,12 +48,16 @@ static inline bool blk_rq_dma_map_coalesce(struct dma_iova_state *state)
 static inline bool blk_rq_dma_unmap(struct request *req, struct device *dma_dev,
 		struct dma_iova_state *state, size_t mapped_len)
 {
+	unsigned int attrs = 0;
+
 	if (req->cmd_flags & REQ_P2PDMA)
 		return true;
 
 	if (dma_use_iova(state)) {
+		if (req->cmd_flags & REQ_MMIO)
+			attrs = DMA_ATTR_MMIO;
 		dma_iova_destroy(dma_dev, state, mapped_len, rq_dma_dir(req),
-				0);
+				attrs);
 		return true;
 	}
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 09b99d52fd36..283058bcb5b1 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -387,6 +387,7 @@ enum req_flag_bits {
 	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
 	__REQ_ATOMIC,		/* for atomic write operations */
 	__REQ_P2PDMA,		/* contains P2P DMA pages */
+	__REQ_MMIO,		/* contains MMIO memory */
 	/*
 	 * Command specific flags, keep last:
 	 */
@@ -420,6 +421,7 @@ enum req_flag_bits {
 #define REQ_FS_PRIVATE	(__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE)
 #define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)
 #define REQ_P2PDMA	(__force blk_opf_t)(1ULL << __REQ_P2PDMA)
+#define REQ_MMIO	(__force blk_opf_t)(1ULL << __REQ_MMIO)
 
 #define REQ_NOUNMAP	(__force blk_opf_t)(1ULL << __REQ_NOUNMAP)
 
-- 
2.50.1
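The same three lines recur at every map and unmap site in this patch; the rule they encode could be summarized in one hypothetical helper (not part of the patch, shown only to state the rule once):

#include <linux/blk-mq.h>
#include <linux/dma-mapping.h>

/* A request flagged REQ_MMIO must be mapped and unmapped as MMIO. */
static inline unsigned int example_rq_dma_attrs(struct request *req)
{
	return (req->cmd_flags & REQ_MMIO) ? DMA_ATTR_MMIO : 0;
}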
From nobody Fri Oct 3 10:10:32 2025

From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v5 16/16] nvme-pci: unmap MMIO pages with appropriate interface
Date: Tue, 2 Sep 2025 17:48:53 +0300

The block layer now maps MMIO memory through the dma_map_phys()
interface with the DMA_ATTR_MMIO attribute. Such memory must be
unmapped with the matching unmap function, which was not possible
before the new REQ_MMIO flag was added to the block layer in the
previous patch.

Reviewed-by: Keith Busch
Signed-off-by: Leon Romanovsky
---
 drivers/nvme/host/pci.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 2c6d9506b172..f8ecc0e0f576 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -682,11 +682,15 @@ static void nvme_free_prps(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+	unsigned int attrs = 0;
 	unsigned int i;
 
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
+
 	for (i = 0; i < iod->nr_dma_vecs; i++)
-		dma_unmap_page(nvmeq->dev->dev, iod->dma_vecs[i].addr,
-				iod->dma_vecs[i].len, rq_dma_dir(req));
+		dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
+				iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
 	mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool);
 }
 
@@ -699,15 +703,19 @@ static void nvme_free_sgls(struct request *req)
 	unsigned int sqe_dma_len = le32_to_cpu(iod->cmd.common.dptr.sgl.length);
 	struct nvme_sgl_desc *sg_list = iod->descriptors[0];
 	enum dma_data_direction dir = rq_dma_dir(req);
+	unsigned int attrs = 0;
+
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
 
 	if (iod->nr_descriptors) {
 		unsigned int nr_entries = sqe_dma_len / sizeof(*sg_list), i;
 
 		for (i = 0; i < nr_entries; i++)
-			dma_unmap_page(dma_dev, le64_to_cpu(sg_list[i].addr),
-				le32_to_cpu(sg_list[i].length), dir);
+			dma_unmap_phys(dma_dev, le64_to_cpu(sg_list[i].addr),
+				le32_to_cpu(sg_list[i].length), dir, attrs);
 	} else {
-		dma_unmap_page(dma_dev, sqe_dma_addr, sqe_dma_len, dir);
+		dma_unmap_phys(dma_dev, sqe_dma_addr, sqe_dma_len, dir, attrs);
 	}
 }
 
-- 
2.50.1
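Taken together, patches 14-16 let the MMIO attribute travel with the request across layers: the block layer sets REQ_MMIO and maps with DMA_ATTR_MMIO, and the NVMe driver consults the flag again so the unmap mirrors the map. A sketch of the unmap side under those assumptions (the function name is illustrative):

#include <linux/blk-mq.h>
#include <linux/dma-mapping.h>

/* Recover the attribute from the request flags so that a mapping
 * created with DMA_ATTR_MMIO is torn down the same way. */
static void example_unmap(struct device *dma_dev, struct request *req,
			  dma_addr_t addr, size_t len)
{
	unsigned int attrs = 0;

	if (req->cmd_flags & REQ_MMIO)
		attrs = DMA_ATTR_MMIO;

	dma_unmap_phys(dma_dev, addr, len, rq_dma_dir(req), attrs);
}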