From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, Abdiel Janulgue, Alexander Potapenko,
 Alex Gaynor, Andrew Morton, Christoph Hellwig, Danilo Krummrich,
 iommu@lists.linux.dev, Jason Wang, Jens Axboe, Joerg Roedel,
 Jonathan Corbet, Juergen Gross, kasan-dev@googlegroups.com, Keith Busch,
 linux-block@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan, Masami Hiramatsu,
 Michael Ellerman, "Michael S. Tsirkin", Miguel Ojeda, Robin Murphy,
 rust-for-linux@vger.kernel.org, Sagi Grimberg, Stefano Stabellini,
 Steven Rostedt, virtualization@lists.linux.dev, Will Deacon,
 xen-devel@lists.xenproject.org
Subject: [PATCH v1 01/16] dma-mapping: introduce new DMA attribute to indicate MMIO memory
Date: Mon, 4 Aug 2025 15:42:35 +0300

From: Leon Romanovsky

This patch introduces the DMA_ATTR_MMIO attribute to mark DMA buffers
that reside in memory-mapped I/O (MMIO) regions, such as device BARs
exposed through the host bridge, which are accessible for peer-to-peer
(P2P) DMA.

This attribute is especially useful for exporting device memory to other
devices for DMA without CPU involvement, and avoids unnecessary or
potentially detrimental CPU cache maintenance calls.

Signed-off-by: Leon Romanovsky
---
 Documentation/core-api/dma-attributes.rst |  7 +++++++
 include/linux/dma-mapping.h               | 14 ++++++++++++++
 include/trace/events/dma.h                |  3 ++-
 rust/kernel/dma.rs                        |  3 +++
 4 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
index 1887d92e8e926..91acd2684e506 100644
--- a/Documentation/core-api/dma-attributes.rst
+++ b/Documentation/core-api/dma-attributes.rst
@@ -130,3 +130,10 @@ accesses to DMA buffers in both privileged "supervisor" and unprivileged
 subsystem that the buffer is fully accessible at the elevated privilege
 level (and ideally inaccessible or at least read-only at the lesser-privileged
 levels).
+
+DMA_ATTR_MMIO
+-------------
+
+This attribute is especially useful for exporting device memory to other
+devices for DMA without CPU involvement, and avoids unnecessary or
+potentially detrimental CPU cache maintenance calls.
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 55c03e5fe8cb3..afc89835c7457 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -58,6 +58,20 @@
  */
 #define DMA_ATTR_PRIVILEGED	(1UL << 9)
 
+/*
+ * DMA_ATTR_MMIO - Indicates memory-mapped I/O (MMIO) region for DMA mapping
+ *
+ * This attribute is used for MMIO memory regions that are exposed through
+ * the host bridge and are accessible for peer-to-peer (P2P) DMA. Memory
+ * marked with this attribute is not system RAM and may represent device
+ * BAR windows or peer-exposed memory.
+ *
+ * Typical usage is for mapping hardware memory BARs or exporting device
+ * memory to other devices for DMA without involving main system RAM.
+ * The attribute guarantees no CPU cache maintenance calls will be made.
+ */
+#define DMA_ATTR_MMIO		(1UL << 10)
+
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
  * be given to a device to use as a DMA source or target. It is specific to a
diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index d8ddc27b6a7c8..ee90d6f1dcf35 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -31,7 +31,8 @@ TRACE_DEFINE_ENUM(DMA_NONE);
 		{ DMA_ATTR_FORCE_CONTIGUOUS, "FORCE_CONTIGUOUS" }, \
 		{ DMA_ATTR_ALLOC_SINGLE_PAGES, "ALLOC_SINGLE_PAGES" }, \
 		{ DMA_ATTR_NO_WARN, "NO_WARN" }, \
-		{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" })
+		{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" }, \
+		{ DMA_ATTR_MMIO, "MMIO" })
 
 DECLARE_EVENT_CLASS(dma_map,
 	TP_PROTO(struct device *dev, phys_addr_t phys_addr, dma_addr_t dma_addr,
diff --git a/rust/kernel/dma.rs b/rust/kernel/dma.rs
index 2bc8ab51ec280..61d9eed7a786e 100644
--- a/rust/kernel/dma.rs
+++ b/rust/kernel/dma.rs
@@ -242,6 +242,9 @@ pub mod attrs {
     /// Indicates that the buffer is fully accessible at an elevated privilege level (and
     /// ideally inaccessible or at least read-only at lesser-privileged levels).
     pub const DMA_ATTR_PRIVILEGED: Attrs = Attrs(bindings::DMA_ATTR_PRIVILEGED);
+
+    /// Indicates that the buffer is MMIO memory.
+    pub const DMA_ATTR_MMIO: Attrs = Attrs(bindings::DMA_ATTR_MMIO);
 }
 
 /// An abstraction of the `dma_alloc_coherent` API.
-- 
2.50.1
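A note for readers following along (not part of the patch): the sketch below shows the contract DMA_ATTR_MMIO is meant to establish for a mapping-layer implementation, mirroring the checks added later in this series. The function name example_map_phys() and the needs_swiotlb_bounce() stand-in are invented for illustration; dev_is_dma_coherent() and arch_sync_dma_for_device() are the existing kernel helpers.

/* Illustrative stand-in for the real "would this range need a bounce buffer?" test. */
static bool needs_swiotlb_bounce(struct device *dev, phys_addr_t phys, size_t size)
{
	return false;
}

static int example_map_phys(struct device *dev, phys_addr_t phys, size_t size,
			    enum dma_data_direction dir, unsigned long attrs)
{
	if (attrs & DMA_ATTR_MMIO) {
		/*
		 * MMIO: no struct page behind the address, no CPU cache
		 * maintenance, and bouncing through swiotlb is impossible.
		 */
		if (needs_swiotlb_bounce(dev, phys, size))
			return -EPERM;
	} else if (!dev_is_dma_coherent(dev) &&
		   !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
		arch_sync_dma_for_device(phys, size, dir);
	}

	return 0;
}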
From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
Subject: [PATCH v1 02/16] iommu/dma: handle MMIO path in dma_iova_link
Date: Mon, 4 Aug 2025 15:42:36 +0300
Message-ID: <52e39cd31d8f30e54a27afac84ea35f45ae4e422.1754292567.git.leon@kernel.org>

From: Leon Romanovsky

Make sure the CPU cache is not synced when the MMIO path is taken.

Signed-off-by: Leon Romanovsky
Reviewed-by: Jason Gunthorpe
---
 drivers/iommu/dma-iommu.c | 21 ++++++++++++++++-----
 1 file changed, 16 insertions(+), 5 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index ea2ef53bd4fef..399838c17b705 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1837,13 +1837,20 @@ static int __dma_iova_link(struct device *dev, dma_addr_t addr,
 		phys_addr_t phys, size_t size, enum dma_data_direction dir,
 		unsigned long attrs)
 {
-	bool coherent = dev_is_dma_coherent(dev);
+	int prot;
 
-	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		arch_sync_dma_for_device(phys, size, dir);
+	if (attrs & DMA_ATTR_MMIO)
+		prot = dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO;
+	else {
+		bool coherent = dev_is_dma_coherent(dev);
+
+		if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+			arch_sync_dma_for_device(phys, size, dir);
+		prot = dma_info_to_prot(dir, coherent, attrs);
+	}
 
 	return iommu_map_nosync(iommu_get_dma_domain(dev), addr, phys, size,
-			dma_info_to_prot(dir, coherent, attrs), GFP_ATOMIC);
+			prot, GFP_ATOMIC);
 }
 
 static int iommu_dma_iova_bounce_and_link(struct device *dev, dma_addr_t addr,
@@ -1949,9 +1956,13 @@ int dma_iova_link(struct device *dev, struct dma_iova_state *state,
 		return -EIO;
 
 	if (dev_use_swiotlb(dev, size, dir) &&
-	    iova_unaligned(iovad, phys, size))
+	    iova_unaligned(iovad, phys, size)) {
+		if (attrs & DMA_ATTR_MMIO)
+			return -EPERM;
+
 		return iommu_dma_iova_link_swiotlb(dev, state, phys, offset,
 				size, dir, attrs);
+	}
 
 	return __dma_iova_link(dev, state->addr + offset - iova_start_pad,
 			phys - iova_start_pad,
-- 
2.50.1
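As context for the new -EPERM case, here is a hedged caller-side sketch (not from the patch) of linking a BAR-backed range. link_peer_bar() and bar_phys are invented names, and the dma_iova_state is assumed to have been prepared with dma_iova_try_alloc() elsewhere.

static int link_peer_bar(struct device *dev, struct dma_iova_state *state,
			 phys_addr_t bar_phys, size_t offset, size_t len)
{
	/* DMA_ATTR_MMIO: map with IOMMU_MMIO protection and never CPU-sync. */
	int ret = dma_iova_link(dev, state, bar_phys, offset, len,
				DMA_TO_DEVICE, DMA_ATTR_MMIO);

	/*
	 * -EPERM means the range is misaligned enough to need a swiotlb
	 * bounce, which an MMIO range cannot use; the caller has to fall
	 * back to another strategy.
	 */
	return ret;
}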
From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
Subject: [PATCH v1 03/16] dma-debug: refactor to use physical addresses for page mapping
Date: Mon, 4 Aug 2025 15:42:37 +0300
Message-ID: <9ba84c387ce67389cd80f374408eebb58326c448.1754292567.git.leon@kernel.org>

From: Leon Romanovsky

Convert the DMA debug infrastructure from page-based to physical
address-based mapping as a preparation to rely on physical addresses
in the DMA mapping routines.

The refactoring renames debug_dma_map_page() to debug_dma_map_phys() and
changes its signature to accept a phys_addr_t parameter instead of a
struct page and offset. Similarly, debug_dma_unmap_page() becomes
debug_dma_unmap_phys(). A new dma_debug_phy entry type is introduced to
distinguish physical address mappings from the other debug entry types.

All callers throughout the codebase are updated to pass physical
addresses directly, which eliminates the page-to-physical conversion in
the debug layer and keeps the code consistent with the DMA mapping API's
move to physical addresses.

Signed-off-by: Leon Romanovsky
Reviewed-by: Jason Gunthorpe
---
 Documentation/core-api/dma-api.rst |  4 ++--
 kernel/dma/debug.c                 | 28 +++++++++++++++++-----------
 kernel/dma/debug.h                 | 16 +++++++---------
 kernel/dma/mapping.c               | 15 ++++++++-------
 4 files changed, 34 insertions(+), 29 deletions(-)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 3087bea715ed2..ca75b35416792 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -761,7 +761,7 @@ example warning message may look like this::
 	[] find_busiest_group+0x207/0x8a0
 	[] _spin_lock_irqsave+0x1f/0x50
 	[] check_unmap+0x203/0x490
-	[] debug_dma_unmap_page+0x49/0x50
+	[] debug_dma_unmap_phys+0x49/0x50
 	[] nv_tx_done_optimized+0xc6/0x2c0
 	[] nv_nic_irq_optimized+0x73/0x2b0
 	[] handle_IRQ_event+0x34/0x70
@@ -855,7 +855,7 @@ that a driver may be leaking mappings.
 dma-debug interface debug_dma_mapping_error() to debug drivers that fail
 to check DMA mapping errors on addresses returned by dma_map_single() and
 dma_map_page() interfaces. This interface clears a flag set by
-debug_dma_map_page() to indicate that dma_mapping_error() has been called by
+debug_dma_map_phys() to indicate that dma_mapping_error() has been called by
 the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
 this flag is still set, prints warning message that includes call trace that
 leads up to the unmap. This interface can be called from dma_mapping_error()
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index e43c6de2bce4e..da6734e3a4ce9 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -39,6 +39,7 @@ enum {
 	dma_debug_sg,
 	dma_debug_coherent,
 	dma_debug_resource,
+	dma_debug_phy,
 };
 
 enum map_err_types {
@@ -141,6 +142,7 @@ static const char *type2name[] = {
 	[dma_debug_sg] = "scatter-gather",
 	[dma_debug_coherent] = "coherent",
 	[dma_debug_resource] = "resource",
+	[dma_debug_phy] = "phy",
 };
 
 static const char *dir2name[] = {
@@ -1201,9 +1203,8 @@ void debug_dma_map_single(struct device *dev, const void *addr,
 }
 EXPORT_SYMBOL(debug_dma_map_single);
 
-void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
-			size_t size, int direction, dma_addr_t dma_addr,
-			unsigned long attrs)
+void debug_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		int direction, dma_addr_t dma_addr, unsigned long attrs)
 {
 	struct dma_debug_entry *entry;
 
@@ -1218,19 +1219,24 @@ void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
 		return;
 
 	entry->dev = dev;
-	entry->type = dma_debug_single;
-	entry->paddr = page_to_phys(page) + offset;
+	entry->type = dma_debug_phy;
+	entry->paddr = phys;
 	entry->dev_addr = dma_addr;
 	entry->size = size;
 	entry->direction = direction;
 	entry->map_err_type = MAP_ERR_NOT_CHECKED;
 
-	check_for_stack(dev, page, offset);
+	if (!(attrs & DMA_ATTR_MMIO)) {
+		struct page *page = phys_to_page(phys);
+		size_t offset = offset_in_page(page);
 
-	if (!PageHighMem(page)) {
-		void *addr = page_address(page) + offset;
+		check_for_stack(dev, page, offset);
 
-		check_for_illegal_area(dev, addr, size);
+		if (!PageHighMem(page)) {
+			void *addr = page_address(page) + offset;
+
+			check_for_illegal_area(dev, addr, size);
+		}
 	}
 
 	add_dma_entry(entry, attrs);
@@ -1274,11 +1280,11 @@ void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 }
 EXPORT_SYMBOL(debug_dma_mapping_error);
 
-void debug_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
+void debug_dma_unmap_phys(struct device *dev, dma_addr_t dma_addr,
 			  size_t size, int direction)
 {
 	struct dma_debug_entry ref = {
-		.type = dma_debug_single,
+		.type = dma_debug_phy,
 		.dev = dev,
 		.dev_addr = dma_addr,
 		.size = size,
diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
index f525197d3cae6..76adb42bffd5f 100644
--- a/kernel/dma/debug.h
+++ b/kernel/dma/debug.h
@@ -9,12 +9,11 @@
 #define _KERNEL_DMA_DEBUG_H
 
 #ifdef CONFIG_DMA_API_DEBUG
-extern void debug_dma_map_page(struct device *dev, struct page *page,
-			       size_t offset, size_t size,
-			       int direction, dma_addr_t dma_addr,
+extern void debug_dma_map_phys(struct device *dev, phys_addr_t phys,
+			       size_t size, int direction, dma_addr_t dma_addr,
 			       unsigned long attrs);
 
-extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
+extern void debug_dma_unmap_phys(struct device *dev, dma_addr_t addr,
 				 size_t size, int direction);
 
 extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
@@ -55,14 +54,13 @@ extern void debug_dma_sync_sg_for_device(struct device *dev,
 					 struct scatterlist *sg,
 					 int nelems, int direction);
 #else /* CONFIG_DMA_API_DEBUG */
-static inline void debug_dma_map_page(struct device *dev, struct page *page,
-				      size_t offset, size_t size,
-				      int direction, dma_addr_t dma_addr,
-				      unsigned long attrs)
+static inline void debug_dma_map_phys(struct device *dev, phys_addr_t phys,
+				      size_t size, int direction,
+				      dma_addr_t dma_addr, unsigned long attrs)
 {
 }
 
-static inline void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
+static inline void debug_dma_unmap_phys(struct device *dev, dma_addr_t addr,
 					size_t size, int direction)
 {
 }
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 107e4a4d251df..4c1dfbabb8ae5 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -157,6 +157,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
+	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t addr;
 
 	BUG_ON(!valid_dma_direction(dir));
@@ -165,16 +166,15 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_map_page_direct(dev, page_to_phys(page) + offset + size))
+	    arch_dma_map_page_direct(dev, phys + size))
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_page(dev, page, offset, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
-	trace_dma_map_page(dev, page_to_phys(page) + offset, addr, size, dir,
-			   attrs);
-	debug_dma_map_page(dev, page, offset, size, dir, addr, attrs);
+	trace_dma_map_page(dev, phys, addr, size, dir, attrs);
+	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
 
 	return addr;
 }
@@ -194,7 +194,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_page(dev, addr, size, dir, attrs);
-	debug_dma_unmap_page(dev, addr, size, dir);
+	debug_dma_unmap_phys(dev, addr, size, dir);
 }
 EXPORT_SYMBOL(dma_unmap_page_attrs);
 
@@ -712,7 +712,8 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
 	if (page) {
 		trace_dma_alloc_pages(dev, page_to_virt(page), *dma_handle,
 				      size, dir, gfp, 0);
-		debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0);
+		debug_dma_map_phys(dev, page_to_phys(page), size, dir,
+				   *dma_handle, 0);
 	} else {
 		trace_dma_alloc_pages(dev, NULL, 0, size, dir, gfp, 0);
 	}
@@ -738,7 +739,7 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
 		dma_addr_t dma_handle, enum dma_data_direction dir)
 {
 	trace_dma_free_pages(dev, page_to_virt(page), dma_handle, size, dir, 0);
-	debug_dma_unmap_page(dev, dma_handle, size, dir);
+	debug_dma_unmap_phys(dev, dma_handle, size, dir);
 	__dma_free_pages(dev, size, page, dma_handle, dir);
 }
 EXPORT_SYMBOL_GPL(dma_free_pages);
-- 
2.50.1
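For reviewers, the call-site conversion applied throughout the patch above follows one pattern, summarized here (illustrative comment only, not a hunk from the patch):

/*
 * Every map call site changes as
 *
 *	debug_dma_map_page(dev, page, offset, size, dir, dma_addr, attrs);
 * becomes
 *	debug_dma_map_phys(dev, page_to_phys(page) + offset, size, dir,
 *			   dma_addr, attrs);
 *
 * while debug_dma_unmap_page()/debug_dma_unmap_phys() keep the same
 * arguments, since they already operated on the dma_addr_t handle.
 */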
From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
Subject: [PATCH v1 04/16] dma-mapping: rename trace_dma_*map_page to trace_dma_*map_phys
Date: Mon, 4 Aug 2025 15:42:38 +0300
Message-ID: <7e10dcba2f3108efc6af13bfdbe8f09073835838.1754292567.git.leon@kernel.org>

From: Leon Romanovsky

In preparation for the following map_page -> map_phys API conversion,
rename trace_dma_*map_page() to trace_dma_*map_phys().

Signed-off-by: Leon Romanovsky
---
 include/trace/events/dma.h | 4 ++--
 kernel/dma/mapping.c       | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index ee90d6f1dcf35..84416c7d6bfaa 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -72,7 +72,7 @@ DEFINE_EVENT(dma_map, name, \
 		 size_t size, enum dma_data_direction dir, unsigned long attrs), \
 	TP_ARGS(dev, phys_addr, dma_addr, size, dir, attrs))
 
-DEFINE_MAP_EVENT(dma_map_page);
+DEFINE_MAP_EVENT(dma_map_phys);
 DEFINE_MAP_EVENT(dma_map_resource);
 
 DECLARE_EVENT_CLASS(dma_unmap,
@@ -110,7 +110,7 @@ DEFINE_EVENT(dma_unmap, name, \
 		 enum dma_data_direction dir, unsigned long attrs), \
 	TP_ARGS(dev, addr, size, dir, attrs))
 
-DEFINE_UNMAP_EVENT(dma_unmap_page);
+DEFINE_UNMAP_EVENT(dma_unmap_phys);
 DEFINE_UNMAP_EVENT(dma_unmap_resource);
 
 DECLARE_EVENT_CLASS(dma_alloc_class,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 4c1dfbabb8ae5..fe1f0da6dc507 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -173,7 +173,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
-	trace_dma_map_page(dev, phys, addr, size, dir, attrs);
+	trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
 	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
 
 	return addr;
@@ -193,7 +193,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 		iommu_dma_unmap_page(dev, addr, size, dir, attrs);
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
-	trace_dma_unmap_page(dev, addr, size, dir, attrs);
+	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
 	debug_dma_unmap_phys(dev, addr, size, dir);
 }
 EXPORT_SYMBOL(dma_unmap_page_attrs);
-- 
2.50.1
From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
Subject: [PATCH v1 05/16] iommu/dma: rename iommu_dma_*map_page to iommu_dma_*map_phys
Date: Mon, 4 Aug 2025 15:42:39 +0300
Message-ID: <9186ccefda5ea97b56ec006900127650f9e324b5.1754292567.git.leon@kernel.org>

From: Leon Romanovsky

Rename the IOMMU DMA mapping functions to better reflect their actual
calling convention. The functions iommu_dma_map_page() and
iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
iommu_dma_unmap_phys() respectively, as they already operate on physical
addresses rather than page structures.

The calling convention changes from accepting (struct page *page,
unsigned long offset) to (phys_addr_t phys), which eliminates the need
for the page-to-physical address conversion inside the functions. This
renaming prepares for the broader DMA API conversion from page-based to
physical address-based mapping throughout the kernel.

All callers are updated to pass physical addresses directly, including
dma_map_page_attrs(), the scatterlist mapping functions, and the DMA
page allocation helpers.

Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 14 ++++++--------
 include/linux/iommu-dma.h |  7 +++----
 kernel/dma/mapping.c      |  4 ++--
 kernel/dma/ops_helpers.c  |  6 +++---
 4 files changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 399838c17b705..11c5d5f8c0981 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1190,11 +1190,9 @@ static inline size_t iova_unaligned(struct iova_domain *iovad, phys_addr_t phys,
 	return iova_offset(iovad, phys | size);
 }
 
-dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
 {
-	phys_addr_t phys = page_to_phys(page) + offset;
 	bool coherent = dev_is_dma_coherent(dev);
 	int prot = dma_info_to_prot(dir, coherent, attrs);
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -1222,7 +1220,7 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	return iova;
 }
 
-void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -1341,7 +1339,7 @@ static void iommu_dma_unmap_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 	int i;
 
 	for_each_sg(sg, s, nents, i)
-		iommu_dma_unmap_page(dev, sg_dma_address(s),
+		iommu_dma_unmap_phys(dev, sg_dma_address(s),
 				sg_dma_len(s), dir, attrs);
 }
 
@@ -1354,8 +1352,8 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 	sg_dma_mark_swiotlb(sg);
 
 	for_each_sg(sg, s, nents, i) {
-		sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
-				s->offset, s->length, dir, attrs);
+		sg_dma_address(s) = iommu_dma_map_phys(dev, sg_phys(s),
+				s->length, dir, attrs);
 		if (sg_dma_address(s) == DMA_MAPPING_ERROR)
 			goto out_unmap;
 		sg_dma_len(s) = s->length;
diff --git a/include/linux/iommu-dma.h b/include/linux/iommu-dma.h
index 508beaa44c39e..485bdffed9888 100644
--- a/include/linux/iommu-dma.h
+++ b/include/linux/iommu-dma.h
@@ -21,10 +21,9 @@ static inline bool use_dma_iommu(struct device *dev)
 }
 #endif /* CONFIG_IOMMU_DMA */
 
-dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs);
-void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs);
+void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs);
 int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 		enum dma_data_direction dir, unsigned long attrs);
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index fe1f0da6dc507..58482536db9bb 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -169,7 +169,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 	    arch_dma_map_page_direct(dev, phys + size))
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else if (use_dma_iommu(dev))
-		addr = iommu_dma_map_page(dev, page, offset, size, dir, attrs);
+		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
@@ -190,7 +190,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	    arch_dma_unmap_page_direct(dev, addr + size))
 		dma_direct_unmap_page(dev, addr, size, dir, attrs);
 	else if (use_dma_iommu(dev))
-		iommu_dma_unmap_page(dev, addr, size, dir, attrs);
+		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 9afd569eadb96..6f9d604d9d406 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -72,8 +72,8 @@ struct page *dma_common_alloc_pages(struct device *dev, size_t size,
 		return NULL;
 
 	if (use_dma_iommu(dev))
-		*dma_handle = iommu_dma_map_page(dev, page, 0, size, dir,
-						 DMA_ATTR_SKIP_CPU_SYNC);
+		*dma_handle = iommu_dma_map_phys(dev, page_to_phys(page), size,
+						 dir, DMA_ATTR_SKIP_CPU_SYNC);
 	else
 		*dma_handle = ops->map_page(dev, page, 0, size, dir,
 					    DMA_ATTR_SKIP_CPU_SYNC);
@@ -92,7 +92,7 @@ void dma_common_free_pages(struct device *dev, size_t size, struct page *page,
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	if (use_dma_iommu(dev))
-		iommu_dma_unmap_page(dev, dma_handle, size, dir,
+		iommu_dma_unmap_phys(dev, dma_handle, size, dir,
 				DMA_ATTR_SKIP_CPU_SYNC);
 	else if (ops->unmap_page)
 		ops->unmap_page(dev, dma_handle, size, dir,
-- 
2.50.1
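A short note on the scatterlist hunk above: the conversion relies on how sg_phys() is defined in include/linux/scatterlist.h, so the same bytes are mapped before and after the rename.

/*
 * sg_phys(sg) is defined as
 *
 *	page_to_phys(sg_page(sg)) + sg->offset
 *
 * so iommu_dma_map_phys(dev, sg_phys(s), s->length, dir, attrs) covers
 * exactly the range that the old
 * iommu_dma_map_page(dev, sg_page(s), s->offset, s->length, dir, attrs)
 * call did.
 */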
From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
Subject: [PATCH v1 06/16] iommu/dma: extend iommu_dma_*map_phys API to handle MMIO memory
Date: Mon, 4 Aug 2025 15:42:40 +0300
Message-ID: <09c04e0428f422c1b13d2b054af16e719de318a3.1754292567.git.leon@kernel.org>

From: Leon Romanovsky

Combine the iommu_dma_*map_phys and iommu_dma_*map_resource interfaces
in order to allow a single phys_addr_t flow.

Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 20 ++++++++++++++++----
 1 file changed, 16 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 11c5d5f8c0981..0a19ce50938b3 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1193,12 +1193,17 @@ static inline size_t iova_unaligned(struct iova_domain *iovad, phys_addr_t phys,
 dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	bool coherent = dev_is_dma_coherent(dev);
-	int prot = dma_info_to_prot(dir, coherent, attrs);
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	struct iommu_dma_cookie *cookie = domain->iova_cookie;
 	struct iova_domain *iovad = &cookie->iovad;
 	dma_addr_t iova, dma_mask = dma_get_mask(dev);
+	bool coherent;
+	int prot;
+
+	if (attrs & DMA_ATTR_MMIO)
+		return __iommu_dma_map(dev, phys, size,
+				dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO,
+				dma_get_mask(dev));
 
 	/*
 	 * If both the physical buffer start address and size are page aligned,
@@ -1211,6 +1216,9 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 		return DMA_MAPPING_ERROR;
 	}
 
+	coherent = dev_is_dma_coherent(dev);
+	prot = dma_info_to_prot(dir, coherent, attrs);
+
 	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		arch_sync_dma_for_device(phys, size, dir);
 
@@ -1223,10 +1231,14 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	phys_addr_t phys;
 
-	phys = iommu_iova_to_phys(domain, dma_handle);
+	if (attrs & DMA_ATTR_MMIO) {
+		__iommu_dma_unmap(dev, dma_handle, size);
+		return;
+	}
+
+	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
 	if (WARN_ON(!phys))
 		return;
 
-- 
2.50.1
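A hedged sketch of the unified flow this enables (not part of the patch; in practice the caller is the DMA core rather than a driver, and map_peer_bar()/bar_phys/bar_len are invented names):

static dma_addr_t map_peer_bar(struct device *dev, phys_addr_t bar_phys,
			       size_t bar_len, enum dma_data_direction dir)
{
	/*
	 * With DMA_ATTR_MMIO the request goes through __iommu_dma_map() with
	 * IOMMU_MMIO protection and skips CPU cache maintenance, which is
	 * what the dedicated iommu_dma_map_resource() path used to provide.
	 */
	return iommu_dma_map_phys(dev, bar_phys, bar_len, dir, DMA_ATTR_MMIO);
}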
smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=quarantine dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1754311438482461.0459613510669; Mon, 4 Aug 2025 05:43:58 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.1069181.1433046 (Exim 4.92) (envelope-from ) id 1uiuXc-0000DM-QL; Mon, 04 Aug 2025 12:43:48 +0000 Received: by outflank-mailman (output) from mailman id 1069181.1433046; Mon, 04 Aug 2025 12:43:48 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uiuXc-0000DC-Lh; Mon, 04 Aug 2025 12:43:48 +0000 Received: by outflank-mailman (input) for mailman id 1069181; Mon, 04 Aug 2025 12:43:47 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uiuXb-0006VD-5w for xen-devel@lists.xenproject.org; Mon, 04 Aug 2025 12:43:47 +0000 Received: from tor.source.kernel.org (tor.source.kernel.org [2600:3c04:e001:324:0:1991:8:25]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id a9037b29-7130-11f0-a321-13f23c93f187; Mon, 04 Aug 2025 14:43:46 +0200 (CEST) Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by tor.source.kernel.org (Postfix) with ESMTP id AE031601FD; Mon, 4 Aug 2025 12:43:44 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9423FC4CEE7; Mon, 4 Aug 2025 12:43:43 +0000 (UTC) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: a9037b29-7130-11f0-a321-13f23c93f187 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1754311424; bh=974P2mquMC7dLPtwo4RbF3aKdnRgjXusEE0OKr9VEPo=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=sRzNIswyOXbP49BscDBQ+94sPL3uQydFVa6yD3MLC4jh0WkntDIM+qnsOlQuXDIVr wxnQwyeKJFQGPd7H59/m+DRXV9WGkkfngBQYFQnV2Vx5zdbDTmlp3SKPzzugttJ5wK Y94WaLaho7OI9fqCDv7wnjA4LbMPxMu1m7XNoFz9vxQttjkrfNEZgFRlKOgE5FUG9O 9S+EDseCadKtr6Zd8jY55mBnPek99fWQfl0LLIwkcioEL+wWEh9akcqBS/aNmw3uOI i2gfa9Kf8zAEiD7yipdlK7kOpCvN+05lo1g4pMkP27RWr31YBzoQ7Nv4fuU8IiE/Yw SuLv6jnr2CKZg== From: Leon Romanovsky To: Marek Szyprowski Cc: Leon Romanovsky , Jason Gunthorpe , Abdiel Janulgue , Alexander Potapenko , Alex Gaynor , Andrew Morton , Christoph Hellwig , Danilo Krummrich , iommu@lists.linux.dev, Jason Wang , Jens Axboe , Joerg Roedel , Jonathan Corbet , Juergen Gross , kasan-dev@googlegroups.com, Keith Busch , linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan , Masami Hiramatsu , Michael Ellerman , "Michael S. 
Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v1 07/16] dma-mapping: convert dma_direct_*map_page to be phys_addr_t based Date: Mon, 4 Aug 2025 15:42:41 +0300 Message-ID: <882499bb37bf4af3dece27d9f791a8982ca4c6a7.1754292567.git.leon@kernel.org> X-Mailer: git-send-email 2.50.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @kernel.org) X-ZM-MESSAGEID: 1754311458793124100 Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Convert the DMA direct mapping functions to accept physical addresses directly instead of page+offset parameters. The functions were already operating on physical addresses internally, so this change eliminates the redundant page-to-physical conversion at the API boundary. The functions dma_direct_map_page() and dma_direct_unmap_page() are renamed to dma_direct_map_phys() and dma_direct_unmap_phys() respectively, with their calling convention changed from (struct page *page, unsigned long offset) to (phys_addr_t phys). Architecture-specific functions arch_dma_map_page_direct() and arch_dma_unmap_page_direct() are similarly renamed to arch_dma_map_phys_direct() and arch_dma_unmap_phys_direct(). The is_pci_p2pdma_page() checks are replaced with DMA_ATTR_MMIO checks to allow integration with dma_direct_map_resource and dma_direct_map_phys() is extended to support MMIO path either. Signed-off-by: Leon Romanovsky --- arch/powerpc/kernel/dma-iommu.c | 4 +-- include/linux/dma-map-ops.h | 8 +++--- kernel/dma/direct.c | 6 ++-- kernel/dma/direct.h | 50 ++++++++++++++++++++------------- kernel/dma/mapping.c | 8 +++--- 5 files changed, 44 insertions(+), 32 deletions(-) diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iomm= u.c index 4d64a5db50f38..0359ab72cd3ba 100644 --- a/arch/powerpc/kernel/dma-iommu.c +++ b/arch/powerpc/kernel/dma-iommu.c @@ -14,7 +14,7 @@ #define can_map_direct(dev, addr) \ ((dev)->bus_dma_limit >=3D phys_to_dma((dev), (addr))) =20 -bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr) +bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr) { if (likely(!dev->bus_dma_limit)) return false; @@ -24,7 +24,7 @@ bool arch_dma_map_page_direct(struct device *dev, phys_ad= dr_t addr) =20 #define is_direct_handle(dev, h) ((h) >=3D (dev)->archdata.dma_offset) =20 -bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle) +bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle) { if (likely(!dev->bus_dma_limit)) return false; diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h index f48e5fb88bd5d..71f5b30254159 100644 --- a/include/linux/dma-map-ops.h +++ b/include/linux/dma-map-ops.h @@ -392,15 +392,15 @@ void *arch_dma_set_uncached(void *addr, size_t size); void arch_dma_clear_uncached(void *addr, size_t size); =20 #ifdef CONFIG_ARCH_HAS_DMA_MAP_DIRECT -bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr); -bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle); +bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr); +bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle); bool arch_dma_map_sg_direct(struct device *dev, struct scatterlist *sg, int nents); bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg, int nents); #else 
-#define arch_dma_map_page_direct(d, a) (false) -#define arch_dma_unmap_page_direct(d, a) (false) +#define arch_dma_map_phys_direct(d, a) (false) +#define arch_dma_unmap_phys_direct(d, a) (false) #define arch_dma_map_sg_direct(d, s, n) (false) #define arch_dma_unmap_sg_direct(d, s, n) (false) #endif diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index 24c359d9c8799..fa75e30700730 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -453,7 +453,7 @@ void dma_direct_unmap_sg(struct device *dev, struct sca= tterlist *sgl, if (sg_dma_is_bus_address(sg)) sg_dma_unmark_bus_address(sg); else - dma_direct_unmap_page(dev, sg->dma_address, + dma_direct_unmap_phys(dev, sg->dma_address, sg_dma_len(sg), dir, attrs); } } @@ -476,8 +476,8 @@ int dma_direct_map_sg(struct device *dev, struct scatte= rlist *sgl, int nents, */ break; case PCI_P2PDMA_MAP_NONE: - sg->dma_address =3D dma_direct_map_page(dev, sg_page(sg), - sg->offset, sg->length, dir, attrs); + sg->dma_address =3D dma_direct_map_phys(dev, sg_phys(sg), + sg->length, dir, attrs); if (sg->dma_address =3D=3D DMA_MAPPING_ERROR) { ret =3D -EIO; goto out_unmap; diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h index d2c0b7e632fc0..2b442efc9b5a7 100644 --- a/kernel/dma/direct.h +++ b/kernel/dma/direct.h @@ -80,42 +80,54 @@ static inline void dma_direct_sync_single_for_cpu(struc= t device *dev, arch_dma_mark_clean(paddr, size); } =20 -static inline dma_addr_t dma_direct_map_page(struct device *dev, - struct page *page, unsigned long offset, size_t size, - enum dma_data_direction dir, unsigned long attrs) +static inline dma_addr_t dma_direct_map_phys(struct device *dev, + phys_addr_t phys, size_t size, enum dma_data_direction dir, + unsigned long attrs) { - phys_addr_t phys =3D page_to_phys(page) + offset; - dma_addr_t dma_addr =3D phys_to_dma(dev, phys); + bool is_mmio =3D attrs & DMA_ATTR_MMIO; + dma_addr_t dma_addr; + bool capable; + + dma_addr =3D (is_mmio) ? 
phys : phys_to_dma(dev, phys); + capable =3D dma_capable(dev, dma_addr, size, is_mmio); + if (is_mmio) { + if (unlikely(!capable)) + goto err_overflow; + return dma_addr; + } =20 - if (is_swiotlb_force_bounce(dev)) { - if (is_pci_p2pdma_page(page)) - return DMA_MAPPING_ERROR; + if (is_swiotlb_force_bounce(dev)) return swiotlb_map(dev, phys, size, dir, attrs); - } =20 - if (unlikely(!dma_capable(dev, dma_addr, size, true)) || - dma_kmalloc_needs_bounce(dev, size, dir)) { - if (is_pci_p2pdma_page(page)) - return DMA_MAPPING_ERROR; + if (unlikely(!capable) || dma_kmalloc_needs_bounce(dev, size, dir)) { if (is_swiotlb_active(dev)) return swiotlb_map(dev, phys, size, dir, attrs); =20 - dev_WARN_ONCE(dev, 1, - "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n", - &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit); - return DMA_MAPPING_ERROR; + goto err_overflow; } =20 if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) arch_sync_dma_for_device(phys, size, dir); return dma_addr; + +err_overflow: + dev_WARN_ONCE( + dev, 1, + "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n", + &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit); + return DMA_MAPPING_ERROR; } =20 -static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t ad= dr, +static inline void dma_direct_unmap_phys(struct device *dev, dma_addr_t ad= dr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - phys_addr_t phys =3D dma_to_phys(dev, addr); + phys_addr_t phys; + + if (attrs & DMA_ATTR_MMIO) + /* nothing to do: uncached and no swiotlb */ + return; =20 + phys =3D dma_to_phys(dev, addr); if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC)) dma_direct_sync_single_for_cpu(dev, addr, size, dir); =20 diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 58482536db9bb..80481a873340a 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -166,8 +166,8 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struc= t page *page, return DMA_MAPPING_ERROR; =20 if (dma_map_direct(dev, ops) || - arch_dma_map_page_direct(dev, phys + size)) - addr =3D dma_direct_map_page(dev, page, offset, size, dir, attrs); + arch_dma_map_phys_direct(dev, phys + size)) + addr =3D dma_direct_map_phys(dev, phys, size, dir, attrs); else if (use_dma_iommu(dev)) addr =3D iommu_dma_map_phys(dev, phys, size, dir, attrs); else @@ -187,8 +187,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_= t addr, size_t size, =20 BUG_ON(!valid_dma_direction(dir)); if (dma_map_direct(dev, ops) || - arch_dma_unmap_page_direct(dev, addr + size)) - dma_direct_unmap_page(dev, addr, size, dir, attrs); + arch_dma_unmap_phys_direct(dev, addr + size)) + dma_direct_unmap_phys(dev, addr, size, dir, attrs); else if (use_dma_iommu(dev)) iommu_dma_unmap_phys(dev, addr, size, dir, attrs); else --=20 2.50.1 From nobody Sun Oct 5 12:46:17 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=quarantine dis=none) header.from=kernel.org ARC-Seal: i=1; a=rsa-sha256; t=1754311924; cv=none; d=zohomail.com; s=zohoarc; 
b=Pth/JNrl7asNAtqYZt6evjpN6ZxwnNwUmaF7I3C5lVZyxVsf4ziDKj8FppKem8g6tYELzhttsv7boflamlnsx8NWBd77t0td5wbwtxVFGHl7axTDRtw0gfOKb0GjVMX5nnET6GtrFvLv7+th79BMjyvoR2Y+l4k9uG9ibIiol/E= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1754311924; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=rGuRtbHmiVjjoyecHucW808Sk+vwQ2t58s6Fmx6aNN0=; b=UyUct7h+FXklXgJx6u27Y1Oa88ZqWh1F2mAbBywO1ChTT9zNjYKqSqxJY8DYZPW1439N6fhquWc64qjxMpjq40kVcigOsHbPg9W+jboLKoScraOf4rUBUNQZzsouy3uCEku3cI0TL1xY+uAjBQjnJBc4SRJhMFBCZNzZfnwfj/U= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=quarantine dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1754311924669877.3897738726806; Mon, 4 Aug 2025 05:52:04 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.1069256.1433136 (Exim 4.92) (envelope-from ) id 1uiufQ-0006Zj-TH; Mon, 04 Aug 2025 12:51:52 +0000 Received: by outflank-mailman (output) from mailman id 1069256.1433136; Mon, 04 Aug 2025 12:51:52 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uiufQ-0006ZZ-PR; Mon, 04 Aug 2025 12:51:52 +0000 Received: by outflank-mailman (input) for mailman id 1069256; Mon, 04 Aug 2025 12:51:51 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uiuXt-0006V7-PH for xen-devel@lists.xenproject.org; Mon, 04 Aug 2025 12:44:05 +0000 Received: from sea.source.kernel.org (sea.source.kernel.org [172.234.252.31]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id b19df0c1-7130-11f0-b898-0df219b8e170; Mon, 04 Aug 2025 14:44:01 +0200 (CEST) Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by sea.source.kernel.org (Postfix) with ESMTP id BB55A43C23; Mon, 4 Aug 2025 12:43:59 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 82117C4CEF8; Mon, 4 Aug 2025 12:43:58 +0000 (UTC) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: b19df0c1-7130-11f0-b898-0df219b8e170 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1754311439; bh=KD5XpS99kfAtzNwNr0G2cKeknKa6uIV3HkFwkgt52Og=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=REzkJIVPP9gFnq006IpzWA33deEBq3gq/L07hYxS4b9Co9lY/j51ovFGlZKxGO1e5 4+8sYBN0i9v9K0mgKzmZ/QHCuA9Q46V87wtCzHKFC0ANzBr9PuysZxQhOrNozb4Qyk VihVy8omfI8O2IRdV4Tyl6d0KjHEM3acEvXoNbj8+O8OXRRPHQMKadb9piVN0E1Wap K/UpRFgSWAcvtvvkCiTXu1nv84wDzdAsa9r8/C/wTH34YjDjAVJPlKm4jsTlVi+zRC U+eUEQKpw0KzkD87pD1mxwxAbEEaEpX1KJV4/DAHcK3vw7alfBmL84qIH19KHyRx/5 kND7FSCrGya4g== From: Leon Romanovsky To: Marek Szyprowski Cc: Leon Romanovsky , Jason Gunthorpe , Abdiel Janulgue , Alexander Potapenko , Alex Gaynor , Andrew Morton , Christoph Hellwig , Danilo Krummrich , 
iommu@lists.linux.dev, Jason Wang , Jens Axboe , Joerg Roedel , Jonathan Corbet , Juergen Gross , kasan-dev@googlegroups.com, Keith Busch , linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan , Masami Hiramatsu , Michael Ellerman , "Michael S. Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v1 08/16] kmsan: convert kmsan_handle_dma to use physical addresses Date: Mon, 4 Aug 2025 15:42:42 +0300 Message-ID: <5b40377b621e49ff4107fa10646c828ccc94e53e.1754292567.git.leon@kernel.org> X-Mailer: git-send-email 2.50.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @kernel.org) X-ZM-MESSAGEID: 1754311927020116600 Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Convert the KMSAN DMA handling function from page-based to physical address-based interface. The refactoring renames kmsan_handle_dma() parameters from accepting (struct page *page, size_t offset, size_t size) to (phys_addr_t phys, size_t size). A PFN_VALID check is added to prevent KMSAN operations on non-page memory, preventing from non struct page backed address, As part of this change, support for highmem addresses is implemented using kmap_local_page() to handle both lowmem and highmem regions properly. All callers throughout the codebase are updated to use the new phys_addr_t based interface. Signed-off-by: Leon Romanovsky --- drivers/virtio/virtio_ring.c | 4 ++-- include/linux/kmsan.h | 12 +++++++----- kernel/dma/mapping.c | 2 +- mm/kmsan/hooks.c | 36 +++++++++++++++++++++++++++++------- tools/virtio/linux/kmsan.h | 2 +- 5 files changed, 40 insertions(+), 16 deletions(-) diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c index f5062061c4084..c147145a65930 100644 --- a/drivers/virtio/virtio_ring.c +++ b/drivers/virtio/virtio_ring.c @@ -378,7 +378,7 @@ static int vring_map_one_sg(const struct vring_virtqueu= e *vq, struct scatterlist * is initialized by the hardware. Explicitly check/unpoison it * depending on the direction. */ - kmsan_handle_dma(sg_page(sg), sg->offset, sg->length, direction); + kmsan_handle_dma(sg_phys(sg), sg->length, direction); *addr =3D (dma_addr_t)sg_phys(sg); return 0; } @@ -3157,7 +3157,7 @@ dma_addr_t virtqueue_dma_map_single_attrs(struct virt= queue *_vq, void *ptr, struct vring_virtqueue *vq =3D to_vvq(_vq); =20 if (!vq->use_dma_api) { - kmsan_handle_dma(virt_to_page(ptr), offset_in_page(ptr), size, dir); + kmsan_handle_dma(virt_to_phys(ptr), size, dir); return (dma_addr_t)virt_to_phys(ptr); } =20 diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h index 2b1432cc16d59..6f27b9824ef77 100644 --- a/include/linux/kmsan.h +++ b/include/linux/kmsan.h @@ -182,8 +182,7 @@ void kmsan_iounmap_page_range(unsigned long start, unsi= gned long end); =20 /** * kmsan_handle_dma() - Handle a DMA data transfer. - * @page: first page of the buffer. - * @offset: offset of the buffer within the first page. + * @phys: physical address of the buffer. * @size: buffer size. * @dir: one of possible dma_data_direction values. 
* @@ -191,8 +190,11 @@ void kmsan_iounmap_page_range(unsigned long start, uns= igned long end); * * checks the buffer, if it is copied to device; * * initializes the buffer, if it is copied from device; * * does both, if this is a DMA_BIDIRECTIONAL transfer. + * + * The function handles page lookup internally and supports both lowmem + * and highmem addresses. */ -void kmsan_handle_dma(struct page *page, size_t offset, size_t size, +void kmsan_handle_dma(phys_addr_t phys, size_t size, enum dma_data_direction dir); =20 /** @@ -372,8 +374,8 @@ static inline void kmsan_iounmap_page_range(unsigned lo= ng start, { } =20 -static inline void kmsan_handle_dma(struct page *page, size_t offset, - size_t size, enum dma_data_direction dir) +static inline void kmsan_handle_dma(phys_addr_t phys, size_t size, + enum dma_data_direction dir) { } =20 diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 80481a873340a..709405d46b2b4 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -172,7 +172,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struc= t page *page, addr =3D iommu_dma_map_phys(dev, phys, size, dir, attrs); else addr =3D ops->map_page(dev, page, offset, size, dir, attrs); - kmsan_handle_dma(page, offset, size, dir); + kmsan_handle_dma(phys, size, dir); trace_dma_map_phys(dev, phys, addr, size, dir, attrs); debug_dma_map_phys(dev, phys, size, dir, addr, attrs); =20 diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c index 97de3d6194f07..eab7912a3bf05 100644 --- a/mm/kmsan/hooks.c +++ b/mm/kmsan/hooks.c @@ -336,25 +336,48 @@ static void kmsan_handle_dma_page(const void *addr, s= ize_t size, } =20 /* Helper function to handle DMA data transfers. */ -void kmsan_handle_dma(struct page *page, size_t offset, size_t size, +void kmsan_handle_dma(phys_addr_t phys, size_t size, enum dma_data_direction dir) { u64 page_offset, to_go, addr; + struct page *page; + void *kaddr; =20 - if (PageHighMem(page)) + if (!pfn_valid(PHYS_PFN(phys))) return; - addr =3D (u64)page_address(page) + offset; + + page =3D phys_to_page(phys); + page_offset =3D offset_in_page(phys); + /* * The kernel may occasionally give us adjacent DMA pages not belonging * to the same allocation. Process them separately to avoid triggering * internal KMSAN checks. */ while (size > 0) { - page_offset =3D offset_in_page(addr); to_go =3D min(PAGE_SIZE - page_offset, (u64)size); + + if (PageHighMem(page)) + /* Handle highmem pages using kmap */ + kaddr =3D kmap_local_page(page); + else + /* Lowmem pages can be accessed directly */ + kaddr =3D page_address(page); + + addr =3D (u64)kaddr + page_offset; kmsan_handle_dma_page((void *)addr, to_go, dir); - addr +=3D to_go; + + if (PageHighMem(page)) + kunmap_local(page); + + phys +=3D to_go; size -=3D to_go; + + /* Move to next page if needed */ + if (size > 0) { + page =3D phys_to_page(phys); + page_offset =3D offset_in_page(phys); + } } } EXPORT_SYMBOL_GPL(kmsan_handle_dma); @@ -366,8 +389,7 @@ void kmsan_handle_dma_sg(struct scatterlist *sg, int ne= nts, int i; =20 for_each_sg(sg, item, nents, i) - kmsan_handle_dma(sg_page(item), item->offset, item->length, - dir); + kmsan_handle_dma(sg_phys(item), item->length, dir); } =20 /* Functions from kmsan-checks.h follow. 
*/ diff --git a/tools/virtio/linux/kmsan.h b/tools/virtio/linux/kmsan.h index 272b5aa285d5a..6cd2e3efd03dc 100644 --- a/tools/virtio/linux/kmsan.h +++ b/tools/virtio/linux/kmsan.h @@ -4,7 +4,7 @@ =20 #include =20 -inline void kmsan_handle_dma(struct page *page, size_t offset, size_t size, +inline void kmsan_handle_dma(phys_addr_t phys, size_t size, enum dma_data_direction dir) { } --=20 2.50.1 From nobody Sun Oct 5 12:46:17 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=quarantine dis=none) header.from=kernel.org ARC-Seal: i=1; a=rsa-sha256; t=1754311898; cv=none; d=zohomail.com; s=zohoarc; b=MGpK/O9FG/wGBeGlB4EumcJ1m5yvvrOmgI9aB/U9LsEN3/AiRy1yl1qczCpQj1uXSG/8SoZTQNv7AVVZ8DV8s02ALttBQ+QTbTBGb20NVezYbkxm+TY30dZ5TnnWpnfMMdzB+Eu+ANMaa9KJTwxDuxPBTHmSu8Q0EG9kJF9JMOE= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1754311898; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=6HioLDHG809Vrmu++exqw36yATpmFSDSvFtTDDJtgR8=; b=A82YUMLkSD8UayS0A3FgKGjKqAPJ6lN2JpWmDIbkfGMWlg0/nquTpeR9/TcfZDWvSBj8RKluN2bmdWbrJJ/Hvpa773dcts8CdT4UpnojVKn5AxfUGYZC1YXj0MR658yY4dI7vuhUjDi268o68nXmo/aNXYhlIa61cLz6aE2lA38= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=quarantine dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1754311898698398.48493125052505; Mon, 4 Aug 2025 05:51:38 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.1069219.1433096 (Exim 4.92) (envelope-from ) id 1uiuf2-0004SQ-P9; Mon, 04 Aug 2025 12:51:28 +0000 Received: by outflank-mailman (output) from mailman id 1069219.1433096; Mon, 04 Aug 2025 12:51:28 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uiuf2-0004SH-KP; Mon, 04 Aug 2025 12:51:28 +0000 Received: by outflank-mailman (input) for mailman id 1069219; Mon, 04 Aug 2025 12:51:27 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uiuXl-0006V7-OS for xen-devel@lists.xenproject.org; Mon, 04 Aug 2025 12:43:57 +0000 Received: from nyc.source.kernel.org (nyc.source.kernel.org [147.75.193.91]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id aeec951e-7130-11f0-b898-0df219b8e170; Mon, 04 Aug 2025 14:43:56 +0200 (CEST) Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 52057A55826; Mon, 4 Aug 2025 12:43:55 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 91EBCC4CEF0; Mon, 4 Aug 2025 12:43:53 +0000 (UTC) X-Outflank-Mailman: Message body and most headers restored to incoming 
version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: aeec951e-7130-11f0-b898-0df219b8e170 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1754311435; bh=j/IXC/mk0CucPDkDZSOLBQ/UHxOlDcVl+SoZNP0e+X0=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=ihw3Mu90j8p2iuP5oqZQ0qRYRAR/2eX9T50SB8VLM/IpeY+W5wW61GhW3av4wc29C 5rQPXs1M+s6jjy1+nIw6SWxZuIavX26CaEl38gSuxVMHKz9Ozga++BdX/JbT+AreaD thi1puaSRP5nUcta2FCkabbmnkHrUDPmtBsN8oUFKF0vUAnM1ZHd3jM+br9WjySLQ8 NTL3Vo53fSv59+EK2VgBP0cZFK4osZ7IM6oRQqchl8AnWygksQAVNXy453HYzPo2QW I3SOs53kRy1kdIAph2yeNZ5VK6beeqdsIxQeMAXinFMWTJElchVQzM+2upe9a5Me8M xXtsI8WERuQcg== From: Leon Romanovsky To: Marek Szyprowski Cc: Leon Romanovsky , Jason Gunthorpe , Abdiel Janulgue , Alexander Potapenko , Alex Gaynor , Andrew Morton , Christoph Hellwig , Danilo Krummrich , iommu@lists.linux.dev, Jason Wang , Jens Axboe , Joerg Roedel , Jonathan Corbet , Juergen Gross , kasan-dev@googlegroups.com, Keith Busch , linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan , Masami Hiramatsu , Michael Ellerman , "Michael S. Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v1 09/16] dma-mapping: handle MMIO flow in dma_map|unmap_page Date: Mon, 4 Aug 2025 15:42:43 +0300 Message-ID: <152745932ce4200e4baaedcc59ef45c230e47896.1754292567.git.leon@kernel.org> X-Mailer: git-send-email 2.50.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @kernel.org) X-ZM-MESSAGEID: 1754311901139124100 Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Extend base DMA page API to handle MMIO flow. Signed-off-by: Leon Romanovsky --- kernel/dma/mapping.c | 24 ++++++++++++++++++++---- 1 file changed, 20 insertions(+), 4 deletions(-) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 709405d46b2b4..f5f051737e556 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -158,6 +158,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struc= t page *page, { const struct dma_map_ops *ops =3D get_dma_ops(dev); phys_addr_t phys =3D page_to_phys(page) + offset; + bool is_mmio =3D attrs & DMA_ATTR_MMIO; dma_addr_t addr; =20 BUG_ON(!valid_dma_direction(dir)); @@ -166,12 +167,23 @@ dma_addr_t dma_map_page_attrs(struct device *dev, str= uct page *page, return DMA_MAPPING_ERROR; =20 if (dma_map_direct(dev, ops) || - arch_dma_map_phys_direct(dev, phys + size)) + (!is_mmio && arch_dma_map_phys_direct(dev, phys + size))) addr =3D dma_direct_map_phys(dev, phys, size, dir, attrs); else if (use_dma_iommu(dev)) addr =3D iommu_dma_map_phys(dev, phys, size, dir, attrs); - else + else if (is_mmio) { + if (!ops->map_resource) + return DMA_MAPPING_ERROR; + + addr =3D ops->map_resource(dev, phys, size, dir, attrs); + } else { + /* + * All platforms which implement .map_page() don't support + * non-struct page backed addresses. 
+ */ addr =3D ops->map_page(dev, page, offset, size, dir, attrs); + } + kmsan_handle_dma(phys, size, dir); trace_dma_map_phys(dev, phys, addr, size, dir, attrs); debug_dma_map_phys(dev, phys, size, dir, addr, attrs); @@ -184,14 +196,18 @@ void dma_unmap_page_attrs(struct device *dev, dma_add= r_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { const struct dma_map_ops *ops =3D get_dma_ops(dev); + bool is_mmio =3D attrs & DMA_ATTR_MMIO; =20 BUG_ON(!valid_dma_direction(dir)); if (dma_map_direct(dev, ops) || - arch_dma_unmap_phys_direct(dev, addr + size)) + (!is_mmio && arch_dma_unmap_phys_direct(dev, addr + size))) dma_direct_unmap_phys(dev, addr, size, dir, attrs); else if (use_dma_iommu(dev)) iommu_dma_unmap_phys(dev, addr, size, dir, attrs); - else + else if (is_mmio) { + if (ops->unmap_resource) + ops->unmap_resource(dev, addr, size, dir, attrs); + } else ops->unmap_page(dev, addr, size, dir, attrs); trace_dma_unmap_phys(dev, addr, size, dir, attrs); debug_dma_unmap_phys(dev, addr, size, dir); --=20 2.50.1 From nobody Sun Oct 5 12:46:17 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=quarantine dis=none) header.from=kernel.org ARC-Seal: i=1; a=rsa-sha256; t=1754311881; cv=none; d=zohomail.com; s=zohoarc; b=VpqaoPx7uuMVLmU0GAsLFBKKBJXoKr2EzUFUFSjVv+kZhg5At92HIuG1LiK3v7jaPHjf8YjgFz8+x66I3tUY3FpBynuxfmPDWJKEDDFN/eVY2ythb5Ot7L5AcA/M7Un7h7RYM43S6datG/Jy3/35tjsMt3TgDuPFLxFMRqavtU4= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1754311881; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=njJVaIqIe6sqKLpFp72s28bWMOkerJgS7/NyUH+svec=; b=mDQ9U0UNtpCV/bRUjjMwQsSVKBsftg+e9p5ICQVSQo7nXTwkt2lWDyYTPDTTqjdtJUdk1Gyazdk1yQgYNzYw/b4SiOBERTZ+Cbdzaa9jW9l7BUrc0hglG+adkkeJrca+eMrfX+qjHEz/1ieIl2EpK/LziL7MSY74mjkXGfbZfw8= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=quarantine dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1754311881299700.8672530504361; Mon, 4 Aug 2025 05:51:21 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.1069208.1433066 (Exim 4.92) (envelope-from ) id 1uiuee-0003Nv-SG; Mon, 04 Aug 2025 12:51:04 +0000 Received: by outflank-mailman (output) from mailman id 1069208.1433066; Mon, 04 Aug 2025 12:51:04 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uiuee-0003No-P6; Mon, 04 Aug 2025 12:51:04 +0000 Received: by outflank-mailman (input) for mailman id 1069208; Mon, 04 Aug 2025 12:51:03 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) 
id 1uiuY5-0006V7-E5 for xen-devel@lists.xenproject.org; Mon, 04 Aug 2025 12:44:17 +0000 Received: from nyc.source.kernel.org (nyc.source.kernel.org [147.75.193.91]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id baab2ecc-7130-11f0-b898-0df219b8e170; Mon, 04 Aug 2025 14:44:15 +0200 (CEST) Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 0920DA55826; Mon, 4 Aug 2025 12:44:15 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 58081C4CEF0; Mon, 4 Aug 2025 12:44:13 +0000 (UTC) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: baab2ecc-7130-11f0-b898-0df219b8e170 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1754311454; bh=qnVGWPERMw1BTa2/gk85Rr9nCLsamOSAPU4HKvyJS1c=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=OtgAnXB9G/hO4kHiG6lUbx1lGdhplK8QSnQvERbDkZeJ2rBPi5BohP2fOcU1EY1H0 LWK7JmM6eYjvDV85Iv2oqAeUvxpoecgv6pnhGevxoOAJjwoIKIBVlnyhDtWl6iMFdx RkkwEUkNj1PK4RIEa7lvhblH+3HASR9LDXUdd9ooPuLmkTQNLafsqp7vExm+lHX79L xgsW9Dlqtb1H+ROfUItFyQuMNgUa0HHgvJ+5cBoMm5E0OKANto/+eCMPooaSWD2S6k W4Hqu5Itep2BtJPMCEt1CECg7NNUHsl45+R6007i0eogZT7o+A+uJhsGr5xPHo73g0 Vn+VRBXex4Xsg== From: Leon Romanovsky To: Marek Szyprowski Cc: Leon Romanovsky , Jason Gunthorpe , Abdiel Janulgue , Alexander Potapenko , Alex Gaynor , Andrew Morton , Christoph Hellwig , Danilo Krummrich , iommu@lists.linux.dev, Jason Wang , Jens Axboe , Joerg Roedel , Jonathan Corbet , Juergen Gross , kasan-dev@googlegroups.com, Keith Busch , linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan , Masami Hiramatsu , Michael Ellerman , "Michael S. Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v1 10/16] xen: swiotlb: Open code map_resource callback Date: Mon, 4 Aug 2025 15:42:44 +0300 Message-ID: X-Mailer: git-send-email 2.50.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @kernel.org) X-ZM-MESSAGEID: 1754311882967124100 Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky General dma_direct_map_resource() is going to be removed in next patch, so simply open-code it in xen driver. 
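Condensed, the open-coded callback added below keeps the identity mapping that dma_direct_map_resource() provided; a sketch of its core, with the error reporting trimmed:

    /* MMIO resources are passed through untranslated */
    if (unlikely(!dma_capable(dev, paddr, size, false)))
            return DMA_MAPPING_ERROR;   /* beyond the device's DMA mask */
    return paddr;                       /* bus address =3D=3D CPU physical */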
Signed-off-by: Leon Romanovsky Reviewed-by: Juergen Gross --- drivers/xen/swiotlb-xen.c | 21 ++++++++++++++++++++- 1 file changed, 20 insertions(+), 1 deletion(-) diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c index da1a7d3d377cf..dd7747a2de879 100644 --- a/drivers/xen/swiotlb-xen.c +++ b/drivers/xen/swiotlb-xen.c @@ -392,6 +392,25 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, str= uct scatterlist *sgl, } } =20 +static dma_addr_t xen_swiotlb_direct_map_resource(struct device *dev, + phys_addr_t paddr, + size_t size, + enum dma_data_direction dir, + unsigned long attrs) +{ + dma_addr_t dma_addr =3D paddr; + + if (unlikely(!dma_capable(dev, dma_addr, size, false))) { + dev_err_once(dev, + "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n", + &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit); + WARN_ON_ONCE(1); + return DMA_MAPPING_ERROR; + } + + return dma_addr; +} + /* * Return whether the given device DMA address mask can be supported * properly. For example, if your device can only drive the low 24-bits @@ -426,5 +445,5 @@ const struct dma_map_ops xen_swiotlb_dma_ops =3D { .alloc_pages_op =3D dma_common_alloc_pages, .free_pages =3D dma_common_free_pages, .max_mapping_size =3D swiotlb_max_mapping_size, - .map_resource =3D dma_direct_map_resource, + .map_resource =3D xen_swiotlb_direct_map_resource, }; --=20 2.50.1 From nobody Sun Oct 5 12:46:17 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=quarantine dis=none) header.from=kernel.org ARC-Seal: i=1; a=rsa-sha256; t=1754311897; cv=none; d=zohomail.com; s=zohoarc; b=kVl8ZudgA+/loOHnta7Ksmkxba92jX04Z0oe/JvC1RomSkxeLZ1kl45NRW7PSYLp7xRUyjL+4Gc8nY9pufyEFNa3wmNMpSx4iEtEnjEu7BPDaOMryHoPSShipnbbrrBpdSg+bh/bYpKR5XcXb6R5pdQN/Yw06H2XS/3V1HL3RPc= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1754311897; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=oozqhYH5WzlQxnRQi/0UiwCCQoDhNmN7489UY899NOI=; b=kjCbLYYNIQEYkT1sg0wWhb4jo/TVyOOdBmvKUTz5drpg1GUmeb+z6zjGEBjqyD5GRCw9SFVPO62qWO1wR4TeY2tOsG+ym9z+IiCEPAuwbpSqBAQGvZBb4SO0A3ANNuJGf78FtA9QD4eFq8Ehxt2YV45Y9iXZUHQOdvkEheyXCy8= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=quarantine dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 175431189770565.34267990916135; Mon, 4 Aug 2025 05:51:37 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.1069217.1433086 (Exim 4.92) (envelope-from ) id 1uiuez-000491-Aq; Mon, 04 Aug 2025 12:51:25 +0000 Received: by outflank-mailman (output) from mailman id 1069217.1433086; Mon, 04 Aug 2025 12:51:25 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp 
(Exim 4.92) (envelope-from ) id 1uiuez-00048s-7A; Mon, 04 Aug 2025 12:51:25 +0000 Received: by outflank-mailman (input) for mailman id 1069217; Mon, 04 Aug 2025 12:51:23 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uiuXu-0006VD-UT for xen-devel@lists.xenproject.org; Mon, 04 Aug 2025 12:44:07 +0000 Received: from nyc.source.kernel.org (nyc.source.kernel.org [147.75.193.91]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id b4ec8bc0-7130-11f0-a321-13f23c93f187; Mon, 04 Aug 2025 14:44:06 +0200 (CEST) Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id 53524A55807; Mon, 4 Aug 2025 12:44:05 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 53748C4CEE7; Mon, 4 Aug 2025 12:44:03 +0000 (UTC) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: b4ec8bc0-7130-11f0-a321-13f23c93f187 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1754311445; bh=nQ8mJNz8JUVPh4M/Ljj9VN6K2zPf9ZLUUVJWTFyBl+A=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=GansV4O33RE7bTiUe/oNxZ1/j8JluPHS14/eOky1Kgw46PimC/h6BaSBUDGRq12+X OkJSrxybJq/HteOgaIzllB8AsL+b7hRdM8DPNeLMHm+vZ/fR/LPNhiUazudBa3ty/O EJ1x4V+MsfVg7VBwvSqZsZVLaRQf9qyLSmRUN2c8y+lw9M06rQodqLnlnf5jTmNx18 LT/LEEm6hKNG2HPfx5SHtIWCLoi1qnrPS19m9ncban08q3hx8pnsDFQqSQLhEQmP8F DMTVJ/Ihc0ToY72sMUBykGK97puqzsvoLrLzWHM0oQIzdVjJNfPXuovmP6lgZeX5pc fRToWSP6g+OEA== From: Leon Romanovsky To: Marek Szyprowski Cc: Leon Romanovsky , Jason Gunthorpe , Abdiel Janulgue , Alexander Potapenko , Alex Gaynor , Andrew Morton , Christoph Hellwig , Danilo Krummrich , iommu@lists.linux.dev, Jason Wang , Jens Axboe , Joerg Roedel , Jonathan Corbet , Juergen Gross , kasan-dev@googlegroups.com, Keith Busch , linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan , Masami Hiramatsu , Michael Ellerman , "Michael S. Tsirkin" , Miguel Ojeda , Robin Murphy , rust-for-linux@vger.kernel.org, Sagi Grimberg , Stefano Stabellini , Steven Rostedt , virtualization@lists.linux.dev, Will Deacon , xen-devel@lists.xenproject.org Subject: [PATCH v1 11/16] dma-mapping: export new dma_*map_phys() interface Date: Mon, 4 Aug 2025 15:42:45 +0300 Message-ID: X-Mailer: git-send-email 2.50.1 In-Reply-To: References: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @kernel.org) X-ZM-MESSAGEID: 1754311899320124100 Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys() that operate directly on physical addresses instead of page+offset parameters. This provides a more efficient interface for drivers that already have physical addresses available. The new functions are implemented as the primary mapping layer, with the existing dma_map_page_attrs() and dma_unmap_page_attrs() functions converted to simple wrappers around the phys-based implementations. 
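For a driver, the resulting call pattern looks like the sketch below (illustrative only; "dev", "paddr", "bar_phys" and "len" are whatever the driver already tracks, not new kernel symbols):

    dma_addr_t dma;

    /* ordinary kernel memory known by physical address */
    dma =3D dma_map_phys(dev, paddr, len, DMA_TO_DEVICE, 0);
    if (dma_mapping_error(dev, dma))
            return -ENOMEM;
    /* ... device access ... */
    dma_unmap_phys(dev, dma, len, DMA_TO_DEVICE, 0);

    /* MMIO, e.g. a peer device BAR: same calls plus DMA_ATTR_MMIO */
    dma =3D dma_map_phys(dev, bar_phys, len, DMA_TO_DEVICE, DMA_ATTR_MMIO);

dma_map_resource() and dma_unmap_resource() keep their prototypes and become thin wrappers that set DMA_ATTR_MMIO before calling the phys helpers.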
The old page-based API is preserved in mapping.c to ensure that existing code won't be affected by changing EXPORT_SYMBOL to EXPORT_SYMBOL_GPL variant for dma_*map_phys(). Signed-off-by: Leon Romanovsky --- drivers/iommu/dma-iommu.c | 14 -------- include/linux/dma-direct.h | 2 -- include/linux/dma-mapping.h | 13 +++++++ include/linux/iommu-dma.h | 4 --- include/trace/events/dma.h | 2 -- kernel/dma/debug.c | 43 ----------------------- kernel/dma/debug.h | 21 ------------ kernel/dma/direct.c | 16 --------- kernel/dma/mapping.c | 68 ++++++++++++++++++++----------------- 9 files changed, 49 insertions(+), 134 deletions(-) diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c index 0a19ce50938b3..69f85209be7ab 100644 --- a/drivers/iommu/dma-iommu.c +++ b/drivers/iommu/dma-iommu.c @@ -1556,20 +1556,6 @@ void iommu_dma_unmap_sg(struct device *dev, struct s= catterlist *sg, int nents, __iommu_dma_unmap(dev, start, end - start); } =20 -dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, - size_t size, enum dma_data_direction dir, unsigned long attrs) -{ - return __iommu_dma_map(dev, phys, size, - dma_info_to_prot(dir, false, attrs) | IOMMU_MMIO, - dma_get_mask(dev)); -} - -void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle, - size_t size, enum dma_data_direction dir, unsigned long attrs) -{ - __iommu_dma_unmap(dev, handle, size); -} - static void __iommu_dma_free(struct device *dev, size_t size, void *cpu_ad= dr) { size_t alloc_size =3D PAGE_ALIGN(size); diff --git a/include/linux/dma-direct.h b/include/linux/dma-direct.h index f3bc0bcd70980..c249912456f96 100644 --- a/include/linux/dma-direct.h +++ b/include/linux/dma-direct.h @@ -149,7 +149,5 @@ void dma_direct_free_pages(struct device *dev, size_t s= ize, struct page *page, dma_addr_t dma_addr, enum dma_data_direction dir); int dma_direct_supported(struct device *dev, u64 mask); -dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr, - size_t size, enum dma_data_direction dir, unsigned long attrs); =20 #endif /* _LINUX_DMA_DIRECT_H */ diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index afc89835c7457..2aa43a6bed92b 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -132,6 +132,10 @@ dma_addr_t dma_map_page_attrs(struct device *dev, stru= ct page *page, unsigned long attrs); void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs); +dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size, + enum dma_data_direction dir, unsigned long attrs); +void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size, + enum dma_data_direction dir, unsigned long attrs); unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs); void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, @@ -186,6 +190,15 @@ static inline void dma_unmap_page_attrs(struct device = *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { } +static inline dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ + return DMA_MAPPING_ERROR; +} +static inline void dma_unmap_phys(struct device *dev, dma_addr_t addr, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ +} static inline unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents, enum 
dma_data_direction dir, unsigned long attrs) diff --git a/include/linux/iommu-dma.h b/include/linux/iommu-dma.h index 485bdffed9888..a92b3ff9b9343 100644 --- a/include/linux/iommu-dma.h +++ b/include/linux/iommu-dma.h @@ -42,10 +42,6 @@ size_t iommu_dma_opt_mapping_size(void); size_t iommu_dma_max_mapping_size(struct device *dev); void iommu_dma_free(struct device *dev, size_t size, void *cpu_addr, dma_addr_t handle, unsigned long attrs); -dma_addr_t iommu_dma_map_resource(struct device *dev, phys_addr_t phys, - size_t size, enum dma_data_direction dir, unsigned long attrs); -void iommu_dma_unmap_resource(struct device *dev, dma_addr_t handle, - size_t size, enum dma_data_direction dir, unsigned long attrs); struct sg_table *iommu_dma_alloc_noncontiguous(struct device *dev, size_t = size, enum dma_data_direction dir, gfp_t gfp, unsigned long attrs); void iommu_dma_free_noncontiguous(struct device *dev, size_t size, diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h index 84416c7d6bfaa..5da59fd8121db 100644 --- a/include/trace/events/dma.h +++ b/include/trace/events/dma.h @@ -73,7 +73,6 @@ DEFINE_EVENT(dma_map, name, \ TP_ARGS(dev, phys_addr, dma_addr, size, dir, attrs)) =20 DEFINE_MAP_EVENT(dma_map_phys); -DEFINE_MAP_EVENT(dma_map_resource); =20 DECLARE_EVENT_CLASS(dma_unmap, TP_PROTO(struct device *dev, dma_addr_t addr, size_t size, @@ -111,7 +110,6 @@ DEFINE_EVENT(dma_unmap, name, \ TP_ARGS(dev, addr, size, dir, attrs)) =20 DEFINE_UNMAP_EVENT(dma_unmap_phys); -DEFINE_UNMAP_EVENT(dma_unmap_resource); =20 DECLARE_EVENT_CLASS(dma_alloc_class, TP_PROTO(struct device *dev, void *virt_addr, dma_addr_t dma_addr, diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c index da6734e3a4ce9..06e31fd216e38 100644 --- a/kernel/dma/debug.c +++ b/kernel/dma/debug.c @@ -38,7 +38,6 @@ enum { dma_debug_single, dma_debug_sg, dma_debug_coherent, - dma_debug_resource, dma_debug_phy, }; =20 @@ -141,7 +140,6 @@ static const char *type2name[] =3D { [dma_debug_single] =3D "single", [dma_debug_sg] =3D "scatter-gather", [dma_debug_coherent] =3D "coherent", - [dma_debug_resource] =3D "resource", [dma_debug_phy] =3D "phy", }; =20 @@ -1448,47 +1446,6 @@ void debug_dma_free_coherent(struct device *dev, siz= e_t size, check_unmap(&ref); } =20 -void debug_dma_map_resource(struct device *dev, phys_addr_t addr, size_t s= ize, - int direction, dma_addr_t dma_addr, - unsigned long attrs) -{ - struct dma_debug_entry *entry; - - if (unlikely(dma_debug_disabled())) - return; - - entry =3D dma_entry_alloc(); - if (!entry) - return; - - entry->type =3D dma_debug_resource; - entry->dev =3D dev; - entry->paddr =3D addr; - entry->size =3D size; - entry->dev_addr =3D dma_addr; - entry->direction =3D direction; - entry->map_err_type =3D MAP_ERR_NOT_CHECKED; - - add_dma_entry(entry, attrs); -} - -void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_addr, - size_t size, int direction) -{ - struct dma_debug_entry ref =3D { - .type =3D dma_debug_resource, - .dev =3D dev, - .dev_addr =3D dma_addr, - .size =3D size, - .direction =3D direction, - }; - - if (unlikely(dma_debug_disabled())) - return; - - check_unmap(&ref); -} - void debug_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_hand= le, size_t size, int direction) { diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h index 76adb42bffd5f..424b8f912aded 100644 --- a/kernel/dma/debug.h +++ b/kernel/dma/debug.h @@ -30,14 +30,6 @@ extern void debug_dma_alloc_coherent(struct device *dev,= size_t size, extern void debug_dma_free_coherent(struct device 
*dev, size_t size, void *virt, dma_addr_t addr); =20 -extern void debug_dma_map_resource(struct device *dev, phys_addr_t addr, - size_t size, int direction, - dma_addr_t dma_addr, - unsigned long attrs); - -extern void debug_dma_unmap_resource(struct device *dev, dma_addr_t dma_ad= dr, - size_t size, int direction); - extern void debug_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size, int direction); @@ -88,19 +80,6 @@ static inline void debug_dma_free_coherent(struct device= *dev, size_t size, { } =20 -static inline void debug_dma_map_resource(struct device *dev, phys_addr_t = addr, - size_t size, int direction, - dma_addr_t dma_addr, - unsigned long attrs) -{ -} - -static inline void debug_dma_unmap_resource(struct device *dev, - dma_addr_t dma_addr, size_t size, - int direction) -{ -} - static inline void debug_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size, int direction) diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c index fa75e30700730..1062caac47e7b 100644 --- a/kernel/dma/direct.c +++ b/kernel/dma/direct.c @@ -502,22 +502,6 @@ int dma_direct_map_sg(struct device *dev, struct scatt= erlist *sgl, int nents, return ret; } =20 -dma_addr_t dma_direct_map_resource(struct device *dev, phys_addr_t paddr, - size_t size, enum dma_data_direction dir, unsigned long attrs) -{ - dma_addr_t dma_addr =3D paddr; - - if (unlikely(!dma_capable(dev, dma_addr, size, false))) { - dev_err_once(dev, - "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n", - &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit); - WARN_ON_ONCE(1); - return DMA_MAPPING_ERROR; - } - - return dma_addr; -} - int dma_direct_get_sgtable(struct device *dev, struct sg_table *sgt, void *cpu_addr, dma_addr_t dma_addr, size_t size, unsigned long attrs) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index f5f051737e556..b747794448130 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -152,12 +152,10 @@ static inline bool dma_map_direct(struct device *dev, return dma_go_direct(dev, *dev->dma_mask, ops); } =20 -dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, - size_t offset, size_t size, enum dma_data_direction dir, - unsigned long attrs) +dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size, + enum dma_data_direction dir, unsigned long attrs) { const struct dma_map_ops *ops =3D get_dma_ops(dev); - phys_addr_t phys =3D page_to_phys(page) + offset; bool is_mmio =3D attrs & DMA_ATTR_MMIO; dma_addr_t addr; =20 @@ -177,6 +175,9 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struc= t page *page, =20 addr =3D ops->map_resource(dev, phys, size, dir, attrs); } else { + struct page *page =3D phys_to_page(phys); + size_t offset =3D offset_in_page(phys); + /* * All platforms which implement .map_page() don't support * non-struct page backed addresses. 
@@ -190,9 +191,25 @@ dma_addr_t dma_map_page_attrs(struct device *dev, stru= ct page *page, =20 return addr; } +EXPORT_SYMBOL_GPL(dma_map_phys); + +dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page, + size_t offset, size_t size, enum dma_data_direction dir, + unsigned long attrs) +{ + phys_addr_t phys =3D page_to_phys(page) + offset; + + if (unlikely(attrs & DMA_ATTR_MMIO)) + return DMA_MAPPING_ERROR; + + if (IS_ENABLED(CONFIG_DMA_API_DEBUG)) + WARN_ON_ONCE(!pfn_valid(PHYS_PFN(phys))); + + return dma_map_phys(dev, phys, size, dir, attrs); +} EXPORT_SYMBOL(dma_map_page_attrs); =20 -void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size, +void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { const struct dma_map_ops *ops =3D get_dma_ops(dev); @@ -212,6 +229,16 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr= _t addr, size_t size, trace_dma_unmap_phys(dev, addr, size, dir, attrs); debug_dma_unmap_phys(dev, addr, size, dir); } +EXPORT_SYMBOL_GPL(dma_unmap_phys); + +void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size, + enum dma_data_direction dir, unsigned long attrs) +{ + if (unlikely(attrs & DMA_ATTR_MMIO)) + return; + + dma_unmap_phys(dev, addr, size, dir, attrs); +} EXPORT_SYMBOL(dma_unmap_page_attrs); =20 static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, @@ -337,41 +364,18 @@ EXPORT_SYMBOL(dma_unmap_sg_attrs); dma_addr_t dma_map_resource(struct device *dev, phys_addr_t phys_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - const struct dma_map_ops *ops =3D get_dma_ops(dev); - dma_addr_t addr =3D DMA_MAPPING_ERROR; - - BUG_ON(!valid_dma_direction(dir)); - - if (WARN_ON_ONCE(!dev->dma_mask)) + if (IS_ENABLED(CONFIG_DMA_API_DEBUG) && + WARN_ON_ONCE(pfn_valid(PHYS_PFN(phys_addr)))) return DMA_MAPPING_ERROR; =20 - if (dma_map_direct(dev, ops)) - addr =3D dma_direct_map_resource(dev, phys_addr, size, dir, attrs); - else if (use_dma_iommu(dev)) - addr =3D iommu_dma_map_resource(dev, phys_addr, size, dir, attrs); - else if (ops->map_resource) - addr =3D ops->map_resource(dev, phys_addr, size, dir, attrs); - - trace_dma_map_resource(dev, phys_addr, addr, size, dir, attrs); - debug_dma_map_resource(dev, phys_addr, size, dir, addr, attrs); - return addr; + return dma_map_phys(dev, phys_addr, size, dir, attrs | DMA_ATTR_MMIO); } EXPORT_SYMBOL(dma_map_resource); =20 void dma_unmap_resource(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { - const struct dma_map_ops *ops =3D get_dma_ops(dev); - - BUG_ON(!valid_dma_direction(dir)); - if (dma_map_direct(dev, ops)) - ; /* nothing to do: uncached and no swiotlb */ - else if (use_dma_iommu(dev)) - iommu_dma_unmap_resource(dev, addr, size, dir, attrs); - else if (ops->unmap_resource) - ops->unmap_resource(dev, addr, size, dir, attrs); - trace_dma_unmap_resource(dev, addr, size, dir, attrs); - debug_dma_unmap_resource(dev, addr, size, dir); + dma_unmap_phys(dev, addr, size, dir, attrs | DMA_ATTR_MMIO); } EXPORT_SYMBOL(dma_unmap_resource); =20 --=20 2.50.1 From nobody Sun Oct 5 12:46:17 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of 
lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=quarantine dis=none) header.from=kernel.org ARC-Seal: i=1; a=rsa-sha256; t=1754311905; cv=none; d=zohomail.com; s=zohoarc; b=b5pYnlT5EAfZFZGbdNzWZ6vmUM03tDewgTMNfkp5WW8J5mrbAwEJ+oqmUCMRnwmeWiddXeDgTwVNwTtmumoI8ciC9S8qiNzDlZJD8625e/RXki5Nwh5KWf+G/QZlrtRb16S9unq6M7MQg0Whit+zBL7FHiWK9K6vwQE4LP9cgaM= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1754311905; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=ZQkh9YNjcmK4oEE8V4mODzcz3fsK3O/5hMC+kUDPf80=; b=b3nNfy8yyGR7mIKUWr5D3O7JaI0CTqmMRAFB0Z4mz0j++ZqO9DUT2Idxv8iNmvZCcDZMGSVGnldpfptz1PKWqWM4p2iBx8R4K3MFCTD12UzNDTA6k8jBqSj+BHrlWzkaZrRbKnbuyAFQ4R33izV1sDI+qPgMriz0AXfRYVzC/P8= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=quarantine dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1754311905147138.59342849074665; Mon, 4 Aug 2025 05:51:45 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.1069224.1433115 (Exim 4.92) (envelope-from ) id 1uiuf7-00056V-AZ; Mon, 04 Aug 2025 12:51:33 +0000 Received: by outflank-mailman (output) from mailman id 1069224.1433115; Mon, 04 Aug 2025 12:51:33 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uiuf7-00056M-64; Mon, 04 Aug 2025 12:51:33 +0000 Received: by outflank-mailman (input) for mailman id 1069224; Mon, 04 Aug 2025 12:51:31 +0000 Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50] helo=se1-gles-flk1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1uiuY1-0006V7-B6 for xen-devel@lists.xenproject.org; Mon, 04 Aug 2025 12:44:13 +0000 Received: from nyc.source.kernel.org (nyc.source.kernel.org [2604:1380:45d1:ec00::3]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS id b7812cf0-7130-11f0-b898-0df219b8e170; Mon, 04 Aug 2025 14:44:10 +0200 (CEST) Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by nyc.source.kernel.org (Postfix) with ESMTP id B24ACA55823; Mon, 4 Aug 2025 12:44:09 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id 41354C4CEE7; Mon, 4 Aug 2025 12:44:08 +0000 (UTC) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: b7812cf0-7130-11f0-b898-0df219b8e170 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1754311449; bh=71zLK4izXOep/KpMivpRm0s6UWtEl8Sf4NUkNJ/aIsU=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=jbu+qcbqIUowFpCDDbVzU+O+oDab7hCwuZ0M/xHzIA/m0ruu3KAVq7ylLBINY55If bo9Tk+feiAbmQkF4YUkUgIp0MAGQcmiQI/Mcg1NkiXwovDTkgHpeGsJGSuvqbW4nNb 6fzLtf+WmA+A9sS3/Z9a3whJQW3+eizW1avVzK6HdjeWo/DTN9Zhf652jC0OiVqnIx daQ4NzeFJg9jNTgGGI/AIUyWbdPkZPaelF6ZKtGOEKFXm1v4YI2nWf6GfMuCj7zPG3 
From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v1 12/16] mm/hmm: migrate to physical address-based DMA mapping API
Date: Mon, 4 Aug 2025 15:42:46 +0300
Message-ID: <6d5896c3c1eb4d481b7d49f1eb661f61353bcfdb.1754292567.git.leon@kernel.org>

From: Leon Romanovsky

Convert HMM DMA operations from the legacy page-based API to the new
physical address-based dma_map_phys() and dma_unmap_phys() functions.
This demonstrates the preferred approach for new code that should use
physical addresses directly rather than page+offset parameters.

The change replaces dma_map_page() and dma_unmap_page() calls with
dma_map_phys() and dma_unmap_phys() respectively, using the physical
address that was already available in the code. This eliminates the
redundant page-to-physical address conversion and aligns with the DMA
subsystem's move toward physical address-centric interfaces.

This serves as an example of how new code should be written to leverage
the more efficient physical address API, which provides cleaner
interfaces for drivers that already have access to physical addresses.
Signed-off-by: Leon Romanovsky
Reviewed-by: Jason Gunthorpe
---
 mm/hmm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index d545e24949949..015ab243f0813 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -775,8 +775,8 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 	if (WARN_ON_ONCE(dma_need_unmap(dev) && !dma_addrs))
 		goto error;
 
-	dma_addr = dma_map_page(dev, page, 0, map->dma_entry_size,
-				DMA_BIDIRECTIONAL);
+	dma_addr = dma_map_phys(dev, paddr, map->dma_entry_size,
+				DMA_BIDIRECTIONAL, 0);
 	if (dma_mapping_error(dev, dma_addr))
 		goto error;
 
@@ -819,8 +819,8 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx)
 		dma_iova_unlink(dev, state, idx * map->dma_entry_size,
 				map->dma_entry_size, DMA_BIDIRECTIONAL, attrs);
 	} else if (dma_need_unmap(dev))
-		dma_unmap_page(dev, dma_addrs[idx], map->dma_entry_size,
-			       DMA_BIDIRECTIONAL);
+		dma_unmap_phys(dev, dma_addrs[idx], map->dma_entry_size,
+			       DMA_BIDIRECTIONAL, 0);
 
 	pfns[idx] &= ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA |
 		       HMM_PFN_P2PDMA_BUS);
-- 
2.50.1
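As a rough sketch of the conversion pattern applied here (illustrative only,
not code from the patch; the wrapper name map_one() is hypothetical): a caller
that starts from a struct page derives the physical address first and then
uses the phys-based API:

	/* Hypothetical wrapper, shown only to illustrate the conversion. */
	static dma_addr_t map_one(struct device *dev, struct page *page,
				  unsigned long offset, size_t len)
	{
		phys_addr_t paddr = page_to_phys(page) + offset;

		/* was: dma_map_page(dev, page, offset, len, DMA_BIDIRECTIONAL) */
		return dma_map_phys(dev, paddr, len, DMA_BIDIRECTIONAL, 0);
	}

In hmm_dma_map_pfn() the physical address is already at hand, so the
page_to_phys() step drops out entirely.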
From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v1 13/16] mm/hmm: properly take MMIO path
Date: Mon, 4 Aug 2025 15:42:47 +0300
Message-ID: <79cf36301cc05d6dd1c88e9c3812ac5c3f57e32b.1754292567.git.leon@kernel.org>

From: Leon Romanovsky

When a peer-to-peer transaction traverses the host bridge, the IOMMU
mapping needs the IOMMU_MMIO flag and the CPU cache sync must be
skipped. The latter was handled by passing DMA_ATTR_SKIP_CPU_SYNC, but
the IOMMU flag was missed because such memory was assumed to behave
like regular memory.

Reuse the newly introduced DMA attribute to properly take the MMIO
path.
Signed-off-by: Leon Romanovsky
Reviewed-by: Jason Gunthorpe
---
 mm/hmm.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index 015ab243f0813..6556c0e074ba8 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -746,7 +746,7 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 	case PCI_P2PDMA_MAP_NONE:
 		break;
 	case PCI_P2PDMA_MAP_THRU_HOST_BRIDGE:
-		attrs |= DMA_ATTR_SKIP_CPU_SYNC;
+		attrs |= DMA_ATTR_MMIO;
 		pfns[idx] |= HMM_PFN_P2PDMA;
 		break;
 	case PCI_P2PDMA_MAP_BUS_ADDR:
@@ -776,7 +776,7 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 		goto error;
 
 	dma_addr = dma_map_phys(dev, paddr, map->dma_entry_size,
-				DMA_BIDIRECTIONAL, 0);
+				DMA_BIDIRECTIONAL, attrs);
 	if (dma_mapping_error(dev, dma_addr))
 		goto error;
 
@@ -811,16 +811,17 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx)
 	if ((pfns[idx] & valid_dma) != valid_dma)
 		return false;
 
+	if (pfns[idx] & HMM_PFN_P2PDMA)
+		attrs |= DMA_ATTR_MMIO;
+
 	if (pfns[idx] & HMM_PFN_P2PDMA_BUS)
 		; /* no need to unmap bus address P2P mappings */
-	else if (dma_use_iova(state)) {
-		if (pfns[idx] & HMM_PFN_P2PDMA)
-			attrs |= DMA_ATTR_SKIP_CPU_SYNC;
+	else if (dma_use_iova(state))
 		dma_iova_unlink(dev, state, idx * map->dma_entry_size,
 				map->dma_entry_size, DMA_BIDIRECTIONAL, attrs);
-	} else if (dma_need_unmap(dev))
+	else if (dma_need_unmap(dev))
 		dma_unmap_phys(dev, dma_addrs[idx], map->dma_entry_size,
-			       DMA_BIDIRECTIONAL, 0);
+			       DMA_BIDIRECTIONAL, attrs);
 
 	pfns[idx] &= ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA |
 		       HMM_PFN_P2PDMA_BUS);
-- 
2.50.1
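In effect (a simplified sketch of the intent, not literal code from the
series), the host-bridge P2P case goes from only suppressing cache maintenance
to also carrying the MMIO property down to the IOMMU mapping:

	/* before: cache sync skipped, but still mapped as regular memory */
	attrs |= DMA_ATTR_SKIP_CPU_SYNC;

	/* after: cache sync skipped and the mapping is treated as MMIO */
	attrs |= DMA_ATTR_MMIO;

The HMM_PFN_P2PDMA bit recorded at map time is what lets the unmap side
recompute the same attribute.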
From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v1 14/16] block-dma: migrate to dma_map_phys instead of map_page
Date: Mon, 4 Aug 2025 15:42:48 +0300
Message-ID: <9b8454a8a24ace186a22242e218e4f4fed103fdd.1754292567.git.leon@kernel.org>

From: Leon Romanovsky

After the introduction of dma_map_phys(), there is no need to convert
a physical address to a struct page in order to map it, so use
dma_map_phys() directly.
Signed-off-by: Leon Romanovsky
---
 block/blk-mq-dma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index ad283017caef2..37e2142be4f7d 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -87,8 +87,8 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
 static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
 		struct blk_dma_iter *iter, struct phys_vec *vec)
 {
-	iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr),
-		offset_in_page(vec->paddr), vec->len, rq_dma_dir(req));
+	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
+			rq_dma_dir(req), 0);
 	if (dma_mapping_error(dma_dev, iter->addr)) {
 		iter->status = BLK_STS_RESOURCE;
 		return false;
-- 
2.50.1
From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v1 15/16] block-dma: properly take MMIO path
Date: Mon, 4 Aug 2025 15:42:49 +0300

From: Leon Romanovsky

Make sure the CPU is not synced and the IOMMU is configured to take
the MMIO path by providing the newly introduced DMA_ATTR_MMIO
attribute.
Signed-off-by: Leon Romanovsky
---
 block/blk-mq-dma.c         | 13 +++++++++++--
 include/linux/blk-mq-dma.h |  6 +++++-
 include/linux/blk_types.h  |  2 ++
 3 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index 37e2142be4f7d..d415088ed9fd2 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -87,8 +87,13 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
 static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
 		struct blk_dma_iter *iter, struct phys_vec *vec)
 {
+	unsigned int attrs = 0;
+
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
+
 	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
-			rq_dma_dir(req), 0);
+			rq_dma_dir(req), attrs);
 	if (dma_mapping_error(dma_dev, iter->addr)) {
 		iter->status = BLK_STS_RESOURCE;
 		return false;
@@ -103,14 +108,17 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
 {
 	enum dma_data_direction dir = rq_dma_dir(req);
 	unsigned int mapped = 0;
+	unsigned int attrs = 0;
 	int error;
 
 	iter->addr = state->addr;
 	iter->len = dma_iova_size(state);
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
 
 	do {
 		error = dma_iova_link(dma_dev, state, vec->paddr, mapped,
-				vec->len, dir, 0);
+				vec->len, dir, attrs);
 		if (error)
 			break;
 		mapped += vec->len;
@@ -176,6 +184,7 @@ bool blk_rq_dma_map_iter_start(struct request *req, struct device *dma_dev,
 		 * same as non-P2P transfers below and during unmap.
 		 */
 		req->cmd_flags &= ~REQ_P2PDMA;
+		req->cmd_flags |= REQ_MMIO;
 		break;
 	default:
 		iter->status = BLK_STS_INVAL;
diff --git a/include/linux/blk-mq-dma.h b/include/linux/blk-mq-dma.h
index c26a01aeae006..6c55f5e585116 100644
--- a/include/linux/blk-mq-dma.h
+++ b/include/linux/blk-mq-dma.h
@@ -48,12 +48,16 @@ static inline bool blk_rq_dma_map_coalesce(struct dma_iova_state *state)
 static inline bool blk_rq_dma_unmap(struct request *req, struct device *dma_dev,
 		struct dma_iova_state *state, size_t mapped_len)
 {
+	unsigned int attrs = 0;
+
 	if (req->cmd_flags & REQ_P2PDMA)
 		return true;
 
 	if (dma_use_iova(state)) {
+		if (req->cmd_flags & REQ_MMIO)
+			attrs = DMA_ATTR_MMIO;
 		dma_iova_destroy(dma_dev, state, mapped_len, rq_dma_dir(req),
-				0);
+				attrs);
 		return true;
 	}
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 09b99d52fd365..283058bcb5b14 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -387,6 +387,7 @@ enum req_flag_bits {
 	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
 	__REQ_ATOMIC,		/* for atomic write operations */
 	__REQ_P2PDMA,		/* contains P2P DMA pages */
+	__REQ_MMIO,		/* contains MMIO memory */
 	/*
 	 * Command specific flags, keep last:
 	 */
@@ -420,6 +421,7 @@ enum req_flag_bits {
 #define REQ_FS_PRIVATE	(__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE)
 #define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)
 #define REQ_P2PDMA	(__force blk_opf_t)(1ULL << __REQ_P2PDMA)
+#define REQ_MMIO	(__force blk_opf_t)(1ULL << __REQ_MMIO)
 
 #define REQ_NOUNMAP	(__force blk_opf_t)(1ULL << __REQ_NOUNMAP)
 
-- 
2.50.1
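A sketch of the unmap-side pattern a consumer of these mappings is expected
to follow (illustrative only; the surrounding variables are placeholders):
derive the attribute from the request flag so that map and unmap stay
symmetric:

	unsigned int attrs = 0;

	if (req->cmd_flags & REQ_MMIO)
		attrs = DMA_ATTR_MMIO;

	dma_unmap_phys(dma_dev, addr, len, rq_dma_dir(req), attrs);

The next patch applies this pattern to the nvme-pci unmap paths.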
From nobody Sun Oct 5 12:46:17 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v1 16/16] nvme-pci: unmap MMIO pages with appropriate interface
Date: Mon, 4 Aug 2025 15:42:50 +0300
Message-ID: <5b0131f82a3d14acaa85f0d1dd608d2913af84e2.1754292567.git.leon@kernel.org>

From: Leon Romanovsky

The block layer maps MMIO memory through the dma_map_phys() interface
with the DMA_ATTR_MMIO attribute, so that memory has to be unmapped
with the matching unmap function and the same attribute.

Signed-off-by: Leon Romanovsky
---
 drivers/nvme/host/pci.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index 071efec25346f..0b624247948c5 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -682,11 +682,15 @@ static void nvme_free_prps(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+	unsigned int attrs = 0;
 	unsigned int i;
 
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
+
 	for (i = 0; i < iod->nr_dma_vecs; i++)
-		dma_unmap_page(nvmeq->dev->dev, iod->dma_vecs[i].addr,
-				iod->dma_vecs[i].len, rq_dma_dir(req));
+		dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
+				iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
 	mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool);
 }
 
@@ -699,15 +703,19 @@ static void nvme_free_sgls(struct request *req)
 	unsigned int sqe_dma_len = le32_to_cpu(iod->cmd.common.dptr.sgl.length);
 	struct nvme_sgl_desc *sg_list = iod->descriptors[0];
 	enum dma_data_direction dir = rq_dma_dir(req);
+	unsigned int attrs = 0;
+
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
 
 	if (iod->nr_descriptors) {
 		unsigned int nr_entries = sqe_dma_len / sizeof(*sg_list), i;
 
 		for (i = 0; i < nr_entries; i++)
-			dma_unmap_page(dma_dev, le64_to_cpu(sg_list[i].addr),
-				le32_to_cpu(sg_list[i].length), dir);
+			dma_unmap_phys(dma_dev, le64_to_cpu(sg_list[i].addr),
+				le32_to_cpu(sg_list[i].length), dir, attrs);
 	} else {
-		dma_unmap_page(dma_dev, sqe_dma_addr, sqe_dma_len, dir);
+		dma_unmap_phys(dma_dev, sqe_dma_addr, sqe_dma_len, dir, attrs);
 	}
 }
 
-- 
2.50.1
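For illustration, the pairing rule this enforces, as a minimal hypothetical
sketch (dev, paddr and len are placeholder names): a mapping created with
DMA_ATTR_MMIO is torn down through dma_unmap_phys() with the same attribute:

	dma_addr_t addr = dma_map_phys(dev, paddr, len, DMA_TO_DEVICE,
				       DMA_ATTR_MMIO);

	if (!dma_mapping_error(dev, addr)) {
		/* ... issue the transfer ... */
		dma_unmap_phys(dev, addr, len, DMA_TO_DEVICE, DMA_ATTR_MMIO);
	}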