From nobody Thu Oct 30 18:20:50 2025
From: Leon Romanovsky
To: Marek Szyprowski, Robin Murphy, Russell King, Juergen Gross,
	Stefano Stabellini, Oleksandr Tyshchenko, Richard Henderson,
	Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley",
	Helge Deller, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Geoff Levand,
	"David S. Miller", Andreas Larsson, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v5 01/14] dma-mapping: prepare dma_map_ops for conversion
 to physical address
Date: Wed, 15 Oct 2025 12:12:47 +0300
Message-ID: <20251015-remove-map-page-v5-1-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Add new .map_phys() and .unmap_phys() callbacks to dma_map_ops as a
preparation to replace .map_page() and .unmap_page() respectively.
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 include/linux/dma-map-ops.h |  7 +++++++
 kernel/dma/mapping.c        |  4 ++++
 kernel/dma/ops_helpers.c    | 12 ++++++++++--
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 10882d00cb17..79d2a74d4b49 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -37,6 +37,13 @@ struct dma_map_ops {
 	void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
 			size_t size, enum dma_data_direction dir,
 			unsigned long attrs);
+
+	dma_addr_t (*map_phys)(struct device *dev, phys_addr_t phys,
+			size_t size, enum dma_data_direction dir,
+			unsigned long attrs);
+	void (*unmap_phys)(struct device *dev, dma_addr_t dma_handle,
+			size_t size, enum dma_data_direction dir,
+			unsigned long attrs);
 	/*
 	 * map_sg should return a negative error code on error. See
 	 * dma_map_sgtable() for a list of appropriate error codes
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index fe7472f13b10..4080aebe5deb 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -169,6 +169,8 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
+	else if (ops->map_phys)
+		addr = ops->map_phys(dev, phys, size, dir, attrs);
 	else if (is_mmio) {
 		if (!ops->map_resource)
 			return DMA_MAPPING_ERROR;
@@ -223,6 +225,8 @@ void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
 		dma_direct_unmap_phys(dev, addr, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
+	else if (ops->unmap_phys)
+		ops->unmap_phys(dev, addr, size, dir, attrs);
 	else if (is_mmio) {
 		if (ops->unmap_resource)
 			ops->unmap_resource(dev, addr, size, dir, attrs);
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 6f9d604d9d40..1eccbdbc99c1 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -64,6 +64,7 @@ struct page *dma_common_alloc_pages(struct device *dev, size_t size,
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 	struct page *page;
+	phys_addr_t phys;

 	page = dma_alloc_contiguous(dev, size, gfp);
 	if (!page)
@@ -71,9 +72,13 @@ struct page *dma_common_alloc_pages(struct device *dev, size_t size,
 	if (!page)
 		return NULL;

+	phys = page_to_phys(page);
 	if (use_dma_iommu(dev))
-		*dma_handle = iommu_dma_map_phys(dev, page_to_phys(page), size,
-						 dir, DMA_ATTR_SKIP_CPU_SYNC);
+		*dma_handle = iommu_dma_map_phys(dev, phys, size, dir,
+						 DMA_ATTR_SKIP_CPU_SYNC);
+	else if (ops->map_phys)
+		*dma_handle = ops->map_phys(dev, phys, size, dir,
+					    DMA_ATTR_SKIP_CPU_SYNC);
 	else
 		*dma_handle = ops->map_page(dev, page, 0, size, dir,
 					    DMA_ATTR_SKIP_CPU_SYNC);
@@ -94,6 +99,9 @@ void dma_common_free_pages(struct device *dev, size_t size, struct page *page,
 	if (use_dma_iommu(dev))
 		iommu_dma_unmap_phys(dev, dma_handle, size, dir,
 				     DMA_ATTR_SKIP_CPU_SYNC);
+	else if (ops->unmap_phys)
+		ops->unmap_phys(dev, dma_handle, size, dir,
+				DMA_ATTR_SKIP_CPU_SYNC);
 	else if (ops->unmap_page)
 		ops->unmap_page(dev, dma_handle, size, dir,
 				DMA_ATTR_SKIP_CPU_SYNC);
-- 
2.51.0

From nobody Thu Oct 30 18:20:50 2025
From: Leon Romanovsky
Subject: [PATCH v5 02/14] dma-mapping: convert dummy ops to physical
 address mapping
Date: Wed, 15 Oct 2025 12:12:48 +0300
Message-ID: <20251015-remove-map-page-v5-2-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Change the dma_dummy_map_page() and dma_dummy_unmap_page() routines to
accept a physical address, and rename them accordingly.
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 kernel/dma/dummy.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/kernel/dma/dummy.c b/kernel/dma/dummy.c
index 92de80e5b057..16a51736a2a3 100644
--- a/kernel/dma/dummy.c
+++ b/kernel/dma/dummy.c
@@ -11,17 +11,16 @@ static int dma_dummy_mmap(struct device *dev, struct vm_area_struct *vma,
 	return -ENXIO;
 }

-static dma_addr_t dma_dummy_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+static dma_addr_t dma_dummy_map_phys(struct device *dev, phys_addr_t phys,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	return DMA_MAPPING_ERROR;
 }

-static void dma_dummy_unmap_page(struct device *dev, dma_addr_t dma_handle,
+static void dma_dummy_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	/*
-	 * Dummy ops doesn't support map_page, so unmap_page should never be
+	 * Dummy ops doesn't support map_phys, so unmap_phys should never be
 	 * called.
 	 */
 	WARN_ON_ONCE(true);
@@ -51,8 +50,8 @@ static int dma_dummy_supported(struct device *hwdev, u64 mask)

 const struct dma_map_ops dma_dummy_ops = {
 	.mmap = dma_dummy_mmap,
-	.map_page = dma_dummy_map_page,
-	.unmap_page = dma_dummy_unmap_page,
+	.map_phys = dma_dummy_map_phys,
+	.unmap_phys = dma_dummy_unmap_phys,
 	.map_sg = dma_dummy_map_sg,
 	.unmap_sg = dma_dummy_unmap_sg,
 	.dma_supported = dma_dummy_supported,
-- 
2.51.0

From nobody Thu Oct 30 18:20:50 2025
From: Leon Romanovsky
Subject: [PATCH v5 03/14] ARM: dma-mapping: Reduce struct page exposure in
 arch_sync_dma*()
Date: Wed, 15 Oct 2025 12:12:49 +0300
Message-ID: <20251015-remove-map-page-v5-3-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

As a preparation to changing from the .map_page to the .map_phys DMA
callback, convert the arch_sync_dma*() functions to use physical addresses
instead of struct page.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 arch/arm/mm/dma-mapping.c | 82 ++++++++++++++++++-----------------------
 1 file changed, 31 insertions(+), 51 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 08641a936394..b0310d6762d5 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -624,16 +624,14 @@ static void __arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
 	kfree(buf);
 }

-static void dma_cache_maint_page(struct page *page, unsigned long offset,
-	size_t size, enum dma_data_direction dir,
+static void dma_cache_maint_page(phys_addr_t phys, size_t size,
+	enum dma_data_direction dir,
 	void (*op)(const void *, size_t, int))
 {
-	unsigned long pfn;
+	unsigned long offset = offset_in_page(phys);
+	unsigned long pfn = __phys_to_pfn(phys);
 	size_t left = size;

-	pfn = page_to_pfn(page) + offset / PAGE_SIZE;
-	offset %= PAGE_SIZE;
-
 	/*
 	 * A single sg entry may refer to multiple physically contiguous
 	 * pages. But we still need to process highmem pages individually.
@@ -644,17 +642,18 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
 		size_t len = left;
 		void *vaddr;

-		page = pfn_to_page(pfn);
-
-		if (PageHighMem(page)) {
+		phys = __pfn_to_phys(pfn);
+		if (PhysHighMem(phys)) {
 			if (len + offset > PAGE_SIZE)
 				len = PAGE_SIZE - offset;

 			if (cache_is_vipt_nonaliasing()) {
-				vaddr = kmap_atomic(page);
+				vaddr = kmap_atomic_pfn(pfn);
 				op(vaddr + offset, len, dir);
 				kunmap_atomic(vaddr);
 			} else {
+				struct page *page = phys_to_page(phys);
+
 				vaddr = kmap_high_get(page);
 				if (vaddr) {
 					op(vaddr + offset, len, dir);
@@ -662,7 +661,8 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
 				}
 			}
 		} else {
-			vaddr = page_address(page) + offset;
+			phys += offset;
+			vaddr = phys_to_virt(phys);
 			op(vaddr, len, dir);
 		}
 		offset = 0;
@@ -676,14 +676,11 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
  * Note: Drivers should NOT use this function directly.
  * Use the driver DMA support - see dma-mapping.h (dma_sync_*)
  */
-static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
-	size_t size, enum dma_data_direction dir)
+void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
+	enum dma_data_direction dir)
 {
-	phys_addr_t paddr;
-
-	dma_cache_maint_page(page, off, size, dir, dmac_map_area);
+	dma_cache_maint_page(paddr, size, dir, dmac_map_area);

-	paddr = page_to_phys(page) + off;
 	if (dir == DMA_FROM_DEVICE) {
 		outer_inv_range(paddr, paddr + size);
 	} else {
@@ -692,17 +689,15 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
 	/* FIXME: non-speculating: flush on bidirectional mappings? */
 }

-static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
-	size_t size, enum dma_data_direction dir)
+void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
+	enum dma_data_direction dir)
 {
-	phys_addr_t paddr = page_to_phys(page) + off;
-
 	/* FIXME: non-speculating: not required */
 	/* in any case, don't bother invalidating if DMA to device */
 	if (dir != DMA_TO_DEVICE) {
 		outer_inv_range(paddr, paddr + size);

-		dma_cache_maint_page(page, off, size, dir, dmac_unmap_area);
+		dma_cache_maint_page(paddr, size, dir, dmac_unmap_area);
 	}

 	/*
@@ -1205,7 +1200,7 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
 		unsigned int len = PAGE_ALIGN(s->offset + s->length);

 		if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-			__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
+			arch_sync_dma_for_device(sg_phys(s), s->length, dir);

 		prot = __dma_info_to_prot(dir, attrs);

@@ -1307,8 +1302,7 @@ static void arm_iommu_unmap_sg(struct device *dev,
 		__iommu_remove_mapping(dev, sg_dma_address(s), sg_dma_len(s));
 		if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-			__dma_page_dev_to_cpu(sg_page(s), s->offset,
-					      s->length, dir);
+			arch_sync_dma_for_cpu(sg_phys(s), s->length, dir);
 	}
 }

@@ -1330,7 +1324,7 @@ static void arm_iommu_sync_sg_for_cpu(struct device *dev,
 		return;

 	for_each_sg(sg, s, nents, i)
-		__dma_page_dev_to_cpu(sg_page(s), s->offset, s->length, dir);
+		arch_sync_dma_for_cpu(sg_phys(s), s->length, dir);

 }

@@ -1352,7 +1346,7 @@ static void arm_iommu_sync_sg_for_device(struct device *dev,
 		return;

 	for_each_sg(sg, s, nents, i)
-		__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
+		arch_sync_dma_for_device(sg_phys(s), s->length, dir);
 }

 /**
@@ -1374,7 +1368,7 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
 	int ret, prot, len = PAGE_ALIGN(size + offset);

 	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		__dma_page_cpu_to_dev(page, offset, size, dir);
+		arch_sync_dma_for_device(page_to_phys(page) + offset, size, dir);

 	dma_addr = __alloc_iova(mapping, len);
 	if (dma_addr == DMA_MAPPING_ERROR)
@@ -1407,7 +1401,6 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page;
 	int offset = handle & ~PAGE_MASK;
 	int len = PAGE_ALIGN(size + offset);

@@ -1415,8 +1408,9 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 		return;

 	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
-		page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
-		__dma_page_dev_to_cpu(page, offset, size, dir);
+		phys_addr_t phys = iommu_iova_to_phys(mapping->domain, iova);
+
+		arch_sync_dma_for_cpu(phys + offset, size, dir);
 	}

 	iommu_unmap(mapping->domain, iova, len);
@@ -1485,14 +1479,14 @@ static void arm_iommu_sync_single_for_cpu(struct device *dev,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page;
 	unsigned int offset = handle & ~PAGE_MASK;
+	phys_addr_t phys;

 	if (dev->dma_coherent || !iova)
 		return;

-	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
-	__dma_page_dev_to_cpu(page, offset, size, dir);
+	phys = iommu_iova_to_phys(mapping->domain, iova);
+	arch_sync_dma_for_cpu(phys + offset, size, dir);
 }

 static void arm_iommu_sync_single_for_device(struct device *dev,
@@ -1500,14 +1494,14 @@ static void arm_iommu_sync_single_for_device(struct device *dev,
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page;
 	unsigned int offset = handle & ~PAGE_MASK;
+	phys_addr_t phys;

 	if (dev->dma_coherent || !iova)
 		return;

-	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
-	__dma_page_cpu_to_dev(page, offset, size, dir);
+	phys = iommu_iova_to_phys(mapping->domain, iova);
+	arch_sync_dma_for_device(phys + offset, size, dir);
 }

 static const struct dma_map_ops iommu_ops = {
@@ -1794,20 +1788,6 @@ void arch_teardown_dma_ops(struct device *dev)
 	set_dma_ops(dev, NULL);
 }

-void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
-	enum dma_data_direction dir)
-{
-	__dma_page_cpu_to_dev(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
-			      size, dir);
-}
-
-void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
-	enum dma_data_direction dir)
-{
-	__dma_page_dev_to_cpu(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
-			      size, dir);
-}
-
 void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 	gfp_t gfp, unsigned long attrs)
 {
-- 
2.51.0

From nobody Thu Oct 30 18:20:50 2025
From: Leon Romanovsky
Subject: [PATCH v5 04/14] ARM: dma-mapping: Switch to physical address
 mapping callbacks
Date: Wed, 15 Oct 2025 12:12:50 +0300
Message-ID: <20251015-remove-map-page-v5-4-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Combine the resource and page mapping routines into one function that
handles both flows in the same manner. This conversion allows us to
remove the .map_resource/.unmap_resource callbacks completely.
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 arch/arm/mm/dma-mapping.c | 100 +++++++++++-----------------------------------
 1 file changed, 23 insertions(+), 77 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index b0310d6762d5..a4c765d24692 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -732,6 +732,9 @@ static int __dma_info_to_prot(enum dma_data_direction dir, unsigned long attrs)
 	if (attrs & DMA_ATTR_PRIVILEGED)
 		prot |= IOMMU_PRIV;
 
+	if (attrs & DMA_ATTR_MMIO)
+		prot |= IOMMU_MMIO;
+
 	switch (dir) {
 	case DMA_BIDIRECTIONAL:
 		return prot | IOMMU_READ | IOMMU_WRITE;
@@ -1350,25 +1353,27 @@ static void arm_iommu_sync_sg_for_device(struct device *dev,
 }
 
 /**
- * arm_iommu_map_page
+ * arm_iommu_map_phys
  * @dev: valid struct device pointer
- * @page: page that buffer resides in
- * @offset: offset into page for start of buffer
+ * @phys: physical address that buffer resides in
  * @size: size of buffer to map
  * @dir: DMA transfer direction
+ * @attrs: DMA mapping attributes
  *
 * IOMMU aware version of arm_dma_map_page()
 */
-static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
-	     unsigned long offset, size_t size, enum dma_data_direction dir,
-	     unsigned long attrs)
+static dma_addr_t arm_iommu_map_phys(struct device *dev, phys_addr_t phys,
+	     size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
+	int len = PAGE_ALIGN(size + offset_in_page(phys));
+	phys_addr_t addr = phys & PAGE_MASK;
 	dma_addr_t dma_addr;
-	int ret, prot, len = PAGE_ALIGN(size + offset);
+	int ret, prot;
 
-	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		arch_sync_dma_for_device(page_to_phys(page), offset, size, dir);
+	if (!dev->dma_coherent &&
+	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
+		arch_sync_dma_for_device(phys, size, dir);
 
 	dma_addr = __alloc_iova(mapping, len);
 	if (dma_addr == DMA_MAPPING_ERROR)
@@ -1376,12 +1381,11 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
 
 	prot = __dma_info_to_prot(dir, attrs);
 
-	ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page), len,
-			prot, GFP_KERNEL);
+	ret = iommu_map(mapping->domain, dma_addr, addr, len, prot, GFP_KERNEL);
 	if (ret < 0)
 		goto fail;
 
-	return dma_addr + offset;
+	return dma_addr + offset_in_page(phys);
 fail:
 	__free_iova(mapping, dma_addr, len);
 	return DMA_MAPPING_ERROR;
@@ -1393,10 +1397,11 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
 * @handle: DMA address of buffer
 * @size: size of buffer (same as passed to dma_map_page)
 * @dir: DMA transfer direction (same as passed to dma_map_page)
+ * @attrs: DMA mapping attributes
 *
- * IOMMU aware version of arm_dma_unmap_page()
+ * IOMMU aware version of arm_dma_unmap_phys()
 */
-static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
+static void arm_iommu_unmap_phys(struct device *dev, dma_addr_t handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
@@ -1407,7 +1412,8 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 	if (!iova)
 		return;
 
-	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
+	if (!dev->dma_coherent &&
+	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
 		phys_addr_t phys = iommu_iova_to_phys(mapping->domain, iova);
 
 		arch_sync_dma_for_cpu(phys + offset, size, dir);
@@ -1417,63 +1423,6 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 	__free_iova(mapping, iova, len);
 }
 
-/**
- * arm_iommu_map_resource - map a device resource for DMA
- * @dev: valid struct device pointer
- * @phys_addr: physical address of resource
- * @size: size of resource to map
- * @dir: DMA transfer direction
- */
-static dma_addr_t arm_iommu_map_resource(struct device *dev,
-		phys_addr_t phys_addr, size_t size,
-		enum dma_data_direction dir, unsigned long attrs)
-{
-	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
-	dma_addr_t dma_addr;
-	int ret, prot;
-	phys_addr_t addr = phys_addr & PAGE_MASK;
-	unsigned int offset = phys_addr & ~PAGE_MASK;
-	size_t len = PAGE_ALIGN(size + offset);
-
-	dma_addr = __alloc_iova(mapping, len);
-	if (dma_addr == DMA_MAPPING_ERROR)
-		return dma_addr;
-
-	prot = __dma_info_to_prot(dir, attrs) | IOMMU_MMIO;
-
-	ret = iommu_map(mapping->domain, dma_addr, addr, len, prot, GFP_KERNEL);
-	if (ret < 0)
-		goto fail;
-
-	return dma_addr + offset;
-fail:
-	__free_iova(mapping, dma_addr, len);
-	return DMA_MAPPING_ERROR;
-}
-
-/**
- * arm_iommu_unmap_resource - unmap a device DMA resource
- * @dev: valid struct device pointer
- * @dma_handle: DMA address to resource
- * @size: size of resource to map
- * @dir: DMA transfer direction
- */
-static void arm_iommu_unmap_resource(struct device *dev, dma_addr_t dma_handle,
-		size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
-	dma_addr_t iova = dma_handle & PAGE_MASK;
-	unsigned int offset = dma_handle & ~PAGE_MASK;
-	size_t len = PAGE_ALIGN(size + offset);
-
-	if (!iova)
-		return;
-
-	iommu_unmap(mapping->domain, iova, len);
-	__free_iova(mapping, iova, len);
-}
-
 static void arm_iommu_sync_single_for_cpu(struct device *dev,
 		dma_addr_t handle, size_t size, enum dma_data_direction dir)
 {
@@ -1510,8 +1459,8 @@ static const struct dma_map_ops iommu_ops = {
 	.mmap		= arm_iommu_mmap_attrs,
 	.get_sgtable	= arm_iommu_get_sgtable,
 
-	.map_page	= arm_iommu_map_page,
-	.unmap_page	= arm_iommu_unmap_page,
+	.map_phys	= arm_iommu_map_phys,
+	.unmap_phys	= arm_iommu_unmap_phys,
 	.sync_single_for_cpu	= arm_iommu_sync_single_for_cpu,
 	.sync_single_for_device	= arm_iommu_sync_single_for_device,
 
@@ -1519,9 +1468,6 @@ static const struct dma_map_ops iommu_ops = {
 	.unmap_sg		= arm_iommu_unmap_sg,
 	.sync_sg_for_cpu	= arm_iommu_sync_sg_for_cpu,
 	.sync_sg_for_device	= arm_iommu_sync_sg_for_device,
-
-	.map_resource	= arm_iommu_map_resource,
-	.unmap_resource	= arm_iommu_unmap_resource,
 };
 
 /**
-- 
2.51.0

From nobody Thu Oct 30 18:20:50 2025
From: Leon Romanovsky
Subject: [PATCH v5 05/14] xen: swiotlb: Switch to physical address mapping callbacks
Date: Wed, 15 Oct 2025 12:12:51 +0300
Message-ID: <20251015-remove-map-page-v5-5-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Combine the resource and page mapping routines into one function and
remove the .map_resource/.unmap_resource callbacks completely.
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/xen/swiotlb-xen.c | 63 ++++++++++++++++++++++-----------------------
 1 file changed, 29 insertions(+), 34 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index dd7747a2de87..ccf25027bec1 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -200,17 +200,32 @@ xen_swiotlb_free_coherent(struct device *dev, size_t size, void *vaddr,
 * physical address to use is returned.
 *
 * Once the device is given the dma address, the device owns this memory until
- * either xen_swiotlb_unmap_page or xen_swiotlb_dma_sync_single is performed.
+ * either xen_swiotlb_unmap_phys or xen_swiotlb_dma_sync_single is performed.
 */
-static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
-				       unsigned long offset, size_t size,
-				       enum dma_data_direction dir,
+static dma_addr_t xen_swiotlb_map_phys(struct device *dev, phys_addr_t phys,
+				       size_t size, enum dma_data_direction dir,
 				       unsigned long attrs)
 {
-	phys_addr_t map, phys = page_to_phys(page) + offset;
-	dma_addr_t dev_addr = xen_phys_to_dma(dev, phys);
+	dma_addr_t dev_addr;
+	phys_addr_t map;
 
 	BUG_ON(dir == DMA_NONE);
+
+	if (attrs & DMA_ATTR_MMIO) {
+		if (unlikely(!dma_capable(dev, phys, size, false))) {
+			dev_err_once(
+				dev,
+				"DMA addr %pa+%zu overflow (mask %llx, bus limit %llx).\n",
+				&phys, size, *dev->dma_mask,
+				dev->bus_dma_limit);
+			WARN_ON_ONCE(1);
+			return DMA_MAPPING_ERROR;
+		}
+		return phys;
+	}
+
+	dev_addr = xen_phys_to_dma(dev, phys);
+
 	/*
 	 * If the address happens to be in the device's DMA window,
 	 * we can safely return the device addr and not worry about bounce
@@ -257,13 +272,13 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 
 /*
 * Unmap a single streaming mode DMA translation.  The dma_addr and size must
- * match what was provided for in a previous xen_swiotlb_map_page call.  All
+ * match what was provided for in a previous xen_swiotlb_map_phys call.  All
 * other usages are undefined.
 *
 * After this call, reads by the cpu to the buffer are guaranteed to see
 * whatever the device wrote there.
 */
-static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
+static void xen_swiotlb_unmap_phys(struct device *hwdev, dma_addr_t dev_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	phys_addr_t paddr = xen_dma_to_phys(hwdev, dev_addr);
@@ -325,7 +340,7 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 
 /*
 * Unmap a set of streaming mode DMA translations.  Again, cpu read rules
- * concerning calls here are the same as for swiotlb_unmap_page() above.
+ * concerning calls here are the same as for swiotlb_unmap_phys() above.
 */
static void
xen_swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sgl, int nelems,
@@ -337,7 +352,7 @@ xen_swiotlb_unmap_sg(struct device *hwdev, struct scatterlist *sgl, int nelems,
 	BUG_ON(dir == DMA_NONE);
 
 	for_each_sg(sgl, sg, nelems, i)
-		xen_swiotlb_unmap_page(hwdev, sg->dma_address, sg_dma_len(sg),
+		xen_swiotlb_unmap_phys(hwdev, sg->dma_address, sg_dma_len(sg),
 				dir, attrs);
 
 }
@@ -352,8 +367,8 @@ xen_swiotlb_map_sg(struct device *dev, struct scatterlist *sgl, int nelems,
 	BUG_ON(dir == DMA_NONE);
 
 	for_each_sg(sgl, sg, nelems, i) {
-		sg->dma_address = xen_swiotlb_map_page(dev, sg_page(sg),
-				sg->offset, sg->length, dir, attrs);
+		sg->dma_address = xen_swiotlb_map_phys(dev, sg_phys(sg),
+				sg->length, dir, attrs);
 		if (sg->dma_address == DMA_MAPPING_ERROR)
 			goto out_unmap;
 		sg_dma_len(sg) = sg->length;
@@ -392,25 +407,6 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 	}
 }
 
-static dma_addr_t xen_swiotlb_direct_map_resource(struct device *dev,
-						  phys_addr_t paddr,
-						  size_t size,
-						  enum dma_data_direction dir,
-						  unsigned long attrs)
-{
-	dma_addr_t dma_addr = paddr;
-
-	if (unlikely(!dma_capable(dev, dma_addr, size, false))) {
-		dev_err_once(dev,
-			     "DMA addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
-			     &dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
-		WARN_ON_ONCE(1);
-		return DMA_MAPPING_ERROR;
-	}
-
-	return dma_addr;
-}
-
 /*
 * Return whether the given device DMA address mask can be supported
 * properly.  For example, if your device can only drive the low 24-bits
@@ -437,13 +433,12 @@ const struct dma_map_ops xen_swiotlb_dma_ops = {
 	.sync_sg_for_device = xen_swiotlb_sync_sg_for_device,
 	.map_sg = xen_swiotlb_map_sg,
 	.unmap_sg = xen_swiotlb_unmap_sg,
-	.map_page = xen_swiotlb_map_page,
-	.unmap_page = xen_swiotlb_unmap_page,
+	.map_phys = xen_swiotlb_map_phys,
+	.unmap_phys = xen_swiotlb_unmap_phys,
 	.dma_supported = xen_swiotlb_dma_supported,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
 	.alloc_pages_op = dma_common_alloc_pages,
 	.free_pages = dma_common_free_pages,
 	.max_mapping_size = swiotlb_max_mapping_size,
-	.map_resource = xen_swiotlb_direct_map_resource,
 };
-- 
2.51.0

From nobody Thu Oct 30 18:20:50 2025
From: Leon Romanovsky
Subject: [PATCH v5 06/14] dma-mapping: remove unused mapping resource callbacks
Date: Wed, 15 Oct 2025 12:12:52 +0300
Message-ID: <20251015-remove-map-page-v5-6-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

After the ARM and Xen conversions to use physical addresses for
mapping, there are no remaining in-kernel users of the
map_resource/unmap_resource callbacks, so remove them.
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 include/linux/dma-map-ops.h |  6 ------
 kernel/dma/mapping.c        | 16 ++++------------
 2 files changed, 4 insertions(+), 18 deletions(-)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 79d2a74d4b49..2e98ecc313a3 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -53,12 +53,6 @@ struct dma_map_ops {
 			enum dma_data_direction dir, unsigned long attrs);
 	void (*unmap_sg)(struct device *dev, struct scatterlist *sg, int nents,
 			enum dma_data_direction dir, unsigned long attrs);
-	dma_addr_t (*map_resource)(struct device *dev, phys_addr_t phys_addr,
-			size_t size, enum dma_data_direction dir,
-			unsigned long attrs);
-	void (*unmap_resource)(struct device *dev, dma_addr_t dma_handle,
-			size_t size, enum dma_data_direction dir,
-			unsigned long attrs);
 	void (*sync_single_for_cpu)(struct device *dev, dma_addr_t dma_handle,
 			size_t size, enum dma_data_direction dir);
 	void (*sync_single_for_device)(struct device *dev,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 4080aebe5deb..32a85bfdf873 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -157,7 +157,7 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 	bool is_mmio = attrs & DMA_ATTR_MMIO;
-	dma_addr_t addr;
+	dma_addr_t addr = DMA_MAPPING_ERROR;
 
 	BUG_ON(!valid_dma_direction(dir));
 
@@ -171,18 +171,13 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else if (ops->map_phys)
 		addr = ops->map_phys(dev, phys, size, dir, attrs);
-	else if (is_mmio) {
-		if (!ops->map_resource)
-			return DMA_MAPPING_ERROR;
-
-		addr = ops->map_resource(dev, phys, size, dir, attrs);
-	} else {
+	else if (!is_mmio && ops->map_page) {
 		struct page *page = phys_to_page(phys);
 		size_t offset = offset_in_page(phys);
 
 		/*
 		 * The dma_ops API contract for ops->map_page() requires
-		 * kmappable memory, while ops->map_resource() does not.
+		 * kmappable memory.
 		 */
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	}
@@ -227,10 +222,7 @@ void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
 		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
 	else if (ops->unmap_phys)
 		ops->unmap_phys(dev, addr, size, dir, attrs);
-	else if (is_mmio) {
-		if (ops->unmap_resource)
-			ops->unmap_resource(dev, addr, size, dir, attrs);
-	} else
+	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
 	debug_dma_unmap_phys(dev, addr, size, dir);
-- 
2.51.0

From nobody Thu Oct 30 18:20:50 2025
From: Leon Romanovsky
Subject: [PATCH v5 07/14] alpha: Convert mapping routine to rely on physical address
Date: Wed, 15 Oct 2025 12:12:53 +0300
Message-ID: <20251015-remove-map-page-v5-7-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Alpha doesn't need struct page and can perform mapping based on
physical addresses, so convert it to implement the new .map_phys
callback. As part of this change, remove the useless BUG_ON(), as the
DMA mapping layer ensures that the right direction is provided.

Tested-by: Magnus Lindholm
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 arch/alpha/kernel/pci_iommu.c | 48 +++++++++++++++++++----------------------
 1 file changed, 21 insertions(+), 27 deletions(-)

diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index dc91de50f906..955b6ca61627 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -224,28 +224,26 @@ static int pci_dac_dma_supported(struct pci_dev *dev, u64 mask)
    until either pci_unmap_single or pci_dma_sync_single is performed.  */
 
 static dma_addr_t
-pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
+pci_map_single_1(struct pci_dev *pdev, phys_addr_t paddr, size_t size,
 		 int dac_allowed)
 {
 	struct pci_controller *hose = pdev ? pdev->sysdata : pci_isa_hose;
 	dma_addr_t max_dma = pdev ? pdev->dma_mask : ISA_DMA_MASK;
+	unsigned long offset = offset_in_page(paddr);
 	struct pci_iommu_arena *arena;
 	long npages, dma_ofs, i;
-	unsigned long paddr;
 	dma_addr_t ret;
 	unsigned int align = 0;
 	struct device *dev = pdev ? &pdev->dev : NULL;
 
-	paddr = __pa(cpu_addr);
-
 #if !DEBUG_NODIRECT
 	/* First check to see if we can use the direct map window.  */
 	if (paddr + size + __direct_map_base - 1 <= max_dma
	    && paddr + size <= __direct_map_size) {
 		ret = paddr + __direct_map_base;
 
-		DBGA2("pci_map_single: [%p,%zx] -> direct %llx from %ps\n",
-		      cpu_addr, size, ret, __builtin_return_address(0));
+		DBGA2("pci_map_single: [%pa,%zx] -> direct %llx from %ps\n",
+		      &paddr, size, ret, __builtin_return_address(0));
 
 		return ret;
 	}
@@ -255,8 +253,8 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
 	if (dac_allowed) {
 		ret = paddr + alpha_mv.pci_dac_offset;
 
-		DBGA2("pci_map_single: [%p,%zx] -> DAC %llx from %ps\n",
-		      cpu_addr, size, ret, __builtin_return_address(0));
+		DBGA2("pci_map_single: [%pa,%zx] -> DAC %llx from %ps\n",
+		      &paddr, size, ret, __builtin_return_address(0));
 
 		return ret;
 	}
@@ -290,10 +288,10 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
 		arena->ptes[i + dma_ofs] = mk_iommu_pte(paddr);
 
 	ret = arena->dma_base + dma_ofs * PAGE_SIZE;
-	ret += (unsigned long)cpu_addr & ~PAGE_MASK;
+	ret += offset;
 
-	DBGA2("pci_map_single: [%p,%zx] np %ld -> sg %llx from %ps\n",
-	      cpu_addr, size, npages, ret, __builtin_return_address(0));
+	DBGA2("pci_map_single: [%pa,%zx] np %ld -> sg %llx from %ps\n",
+	      &paddr, size, npages, ret, __builtin_return_address(0));
 
 	return ret;
 }
@@ -322,19 +320,18 @@ static struct pci_dev *alpha_gendev_to_pci(struct device *dev)
 	return NULL;
 }
 
-static dma_addr_t alpha_pci_map_page(struct device *dev, struct page *page,
-				     unsigned long offset, size_t size,
-				     enum dma_data_direction dir,
+static dma_addr_t alpha_pci_map_phys(struct device *dev, phys_addr_t phys,
+				     size_t size, enum dma_data_direction dir,
 				     unsigned long attrs)
 {
 	struct pci_dev *pdev = alpha_gendev_to_pci(dev);
 	int dac_allowed;
 
-	BUG_ON(dir == DMA_NONE);
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
 
-	dac_allowed = pdev ? pci_dac_dma_supported(pdev, pdev->dma_mask) : 0;
-	return pci_map_single_1(pdev, (char *)page_address(page) + offset,
-				size, dac_allowed);
+	dac_allowed = pdev ? pci_dac_dma_supported(pdev, pdev->dma_mask) : 0;
+	return pci_map_single_1(pdev, phys, size, dac_allowed);
 }
 
 /* Unmap a single streaming mode DMA translation.  The DMA_ADDR and
@@ -343,7 +340,7 @@ static dma_addr_t alpha_pci_map_page(struct device *dev, struct page *page,
    the cpu to the buffer are guaranteed to see whatever the device
    wrote there.  */
 
-static void alpha_pci_unmap_page(struct device *dev, dma_addr_t dma_addr,
+static void alpha_pci_unmap_phys(struct device *dev, dma_addr_t dma_addr,
 				 size_t size, enum dma_data_direction dir,
 				 unsigned long attrs)
 {
@@ -353,8 +350,6 @@ static void alpha_pci_unmap_page(struct device *dev, dma_addr_t dma_addr,
 	struct pci_iommu_arena *arena;
 	long dma_ofs, npages;
 
-	BUG_ON(dir == DMA_NONE);
-
 	if (dma_addr >= __direct_map_base
	    && dma_addr < __direct_map_base + __direct_map_size) {
 		/* Nothing to do.  */
@@ -429,7 +424,7 @@ static void *alpha_pci_alloc_coherent(struct device *dev, size_t size,
 	}
 	memset(cpu_addr, 0, size);
 
-	*dma_addrp = pci_map_single_1(pdev, cpu_addr, size, 0);
+	*dma_addrp = pci_map_single_1(pdev, virt_to_phys(cpu_addr), size, 0);
 	if (*dma_addrp == DMA_MAPPING_ERROR) {
 		free_pages((unsigned long)cpu_addr, order);
 		if (alpha_mv.mv_pci_tbi || (gfp & GFP_DMA))
@@ -643,9 +638,8 @@ static int alpha_pci_map_sg(struct device *dev, struct scatterlist *sg,
	/* Fast path single entry scatterlists.
*/ if (nents =3D=3D 1) { sg->dma_length =3D sg->length; - sg->dma_address - =3D pci_map_single_1(pdev, SG_ENT_VIRT_ADDRESS(sg), - sg->length, dac_allowed); + sg->dma_address =3D pci_map_single_1(pdev, sg_phys(sg), + sg->length, dac_allowed); if (sg->dma_address =3D=3D DMA_MAPPING_ERROR) return -EIO; return 1; @@ -917,8 +911,8 @@ iommu_unbind(struct pci_iommu_arena *arena, long pg_sta= rt, long pg_count) const struct dma_map_ops alpha_pci_ops =3D { .alloc =3D alpha_pci_alloc_coherent, .free =3D alpha_pci_free_coherent, - .map_page =3D alpha_pci_map_page, - .unmap_page =3D alpha_pci_unmap_page, + .map_phys =3D alpha_pci_map_phys, + .unmap_phys =3D alpha_pci_unmap_phys, .map_sg =3D alpha_pci_map_sg, .unmap_sg =3D alpha_pci_unmap_sg, .dma_supported =3D alpha_pci_supported, --=20 2.51.0 From nobody Thu Oct 30 18:20:50 2025 Delivered-To: importer@patchew.org Received-SPF: pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) client-ip=192.237.175.120; envelope-from=xen-devel-bounces@lists.xenproject.org; helo=lists.xenproject.org; Authentication-Results: mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass(p=quarantine dis=none) header.from=kernel.org ARC-Seal: i=1; a=rsa-sha256; t=1760519635; cv=none; d=zohomail.com; s=zohoarc; b=Sjup3v2rqLgrWzAntjUmnlhrbwRmmmzhy+qcFok8EfVjFzjjca0lDzIX58J2lCQ80o5JIpOH9FOt6e2QbOFb51jL4orogor49fYBdFr3iNcADk3wEsFoKAkifv3WAtwUAzG3fbljGmQWP7Q6UHEqnyZE7lgI4IUjbiaIv3WaRY0= ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; t=1760519635; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=4bgfPlYm5NnLREwcTJMCLZzRzJFsxcYwx/hKPzErniI=; 
From: Leon Romanovsky
Subject: [PATCH v5 08/14] MIPS/jazzdma: Provide physical address directly
Date: Wed, 15 Oct 2025 12:12:54 +0300
Message-ID: <20251015-remove-map-page-v5-8-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

The MIPS Jazz DMA code uses physical addresses for mapping pages, so
convert it to receive them directly from the DMA mapping routine.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 arch/mips/jazz/jazzdma.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/mips/jazz/jazzdma.c b/arch/mips/jazz/jazzdma.c
index c97b089b9902..eb9fb2f2a720 100644
--- a/arch/mips/jazz/jazzdma.c
+++ b/arch/mips/jazz/jazzdma.c
@@ -521,18 +521,24 @@ static void jazz_dma_free(struct device *dev, size_t size, void *vaddr,
 	__free_pages(virt_to_page(vaddr), get_order(size));
 }
 
-static dma_addr_t jazz_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+static dma_addr_t jazz_dma_map_phys(struct device *dev, phys_addr_t phys,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	phys_addr_t phys = page_to_phys(page) + offset;
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		/*
+		 * This check is included because older versions of the code
+		 * lacked MMIO path support, and my ability to test this path
+		 * is limited. However, from a software technical standpoint,
+		 * there is no restriction, as the following code operates
+		 * solely on physical addresses.
+		 */
+		return DMA_MAPPING_ERROR;
 
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		arch_sync_dma_for_device(phys, size, dir);
 	return vdma_alloc(phys, size);
 }
 
-static void jazz_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
+static void jazz_dma_unmap_phys(struct device *dev, dma_addr_t dma_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
@@ -607,8 +613,8 @@ static void jazz_dma_sync_sg_for_cpu(struct device *dev,
 const struct dma_map_ops jazz_dma_ops = {
 	.alloc			= jazz_dma_alloc,
 	.free			= jazz_dma_free,
-	.map_page		= jazz_dma_map_page,
-	.unmap_page		= jazz_dma_unmap_page,
+	.map_phys		= jazz_dma_map_phys,
+	.unmap_phys		= jazz_dma_unmap_phys,
 	.map_sg			= jazz_dma_map_sg,
 	.unmap_sg		= jazz_dma_unmap_sg,
 	.sync_single_for_cpu	= jazz_dma_sync_single_for_cpu,

-- 
2.51.0
From: Leon Romanovsky
Subject: [PATCH v5 09/14] parisc: Convert DMA map_page to map_phys interface
Date: Wed, 15 Oct 2025 12:12:55 +0300
Message-ID: <20251015-remove-map-page-v5-9-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Perform a mechanical conversion from the .map_page callback to the
.map_phys interface.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/parisc/ccio-dma.c      | 54 +++++++++++++++++++-----------------
 drivers/parisc/iommu-helpers.h | 10 ++++----
 drivers/parisc/sba_iommu.c     | 54 +++++++++++++++++++------------------
 3 files changed, 59 insertions(+), 59 deletions(-)

diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
index feef537257d0..4e7071714356 100644
--- a/drivers/parisc/ccio-dma.c
+++ b/drivers/parisc/ccio-dma.c
@@ -517,10 +517,10 @@ static u32 hint_lookup[] = {
  * ccio_io_pdir_entry - Initialize an I/O Pdir.
  * @pdir_ptr: A pointer into I/O Pdir.
  * @sid: The Space Identifier.
- * @vba: The virtual address.
+ * @pba: The physical address.
  * @hints: The DMA Hint.
  *
- * Given a virtual address (vba, arg2) and space id, (sid, arg1),
+ * Given a physical address (pba, arg2) and space id, (sid, arg1),
  * load the I/O PDIR entry pointed to by pdir_ptr (arg0). Each IO Pdir
  * entry consists of 8 bytes as shown below (MSB == bit 0):
  *
@@ -543,7 +543,7 @@ static u32 hint_lookup[] = {
  *	index are bits 12:19 of the value returned by LCI.
  */ 
 static void
-ccio_io_pdir_entry(__le64 *pdir_ptr, space_t sid, unsigned long vba,
+ccio_io_pdir_entry(__le64 *pdir_ptr, space_t sid, phys_addr_t pba,
 		   unsigned long hints)
 {
 	register unsigned long pa;
@@ -557,7 +557,7 @@ ccio_io_pdir_entry(__le64 *pdir_ptr, space_t sid, unsigned long vba,
 	** "hints" parm includes the VALID bit!
 	** "dep" clobbers the physical address offset bits as well.
 	*/
-	pa = lpa(vba);
+	pa = pba;
 	asm volatile("depw  %1,31,12,%0" : "+r" (pa) : "r" (hints));
 	((u32 *)pdir_ptr)[1] = (u32) pa;
 
@@ -582,7 +582,7 @@ ccio_io_pdir_entry(__le64 *pdir_ptr, space_t sid, unsigned long vba,
 	** Grab virtual index [0:11]
 	** Deposit virt_idx bits into I/O PDIR word
 	*/
-	asm volatile ("lci %%r0(%1), %0" : "=r" (ci) : "r" (vba));
+	asm volatile ("lci %%r0(%1), %0" : "=r" (ci) : "r" (phys_to_virt(pba)));
 	asm volatile ("extru %1,19,12,%0" : "+r" (ci) : "r" (ci));
 	asm volatile ("depw  %1,15,12,%0" : "+r" (pa) : "r" (ci));
 
@@ -704,14 +704,14 @@ ccio_dma_supported(struct device *dev, u64 mask)
 /**
  * ccio_map_single - Map an address range into the IOMMU.
  * @dev: The PCI device.
- * @addr: The start address of the DMA region.
+ * @addr: The physical address of the DMA region.
  * @size: The length of the DMA region.
  * @direction: The direction of the DMA transaction (to/from device).
  *
  * This function implements the pci_map_single function.
  */
 static dma_addr_t 
-ccio_map_single(struct device *dev, void *addr, size_t size,
+ccio_map_single(struct device *dev, phys_addr_t addr, size_t size,
 		enum dma_data_direction direction)
 {
 	int idx;
@@ -730,7 +730,7 @@ ccio_map_single(struct device *dev, void *addr, size_t size,
 	BUG_ON(size <= 0);
 
 	/* save offset bits */
-	offset = ((unsigned long) addr) & ~IOVP_MASK;
+	offset = offset_in_page(addr);
 
 	/* round up to nearest IOVP_SIZE */
 	size = ALIGN(size + offset, IOVP_SIZE);
@@ -746,15 +746,15 @@ ccio_map_single(struct device *dev, void *addr, size_t size,
 
 	pdir_start = &(ioc->pdir_base[idx]);
 
-	DBG_RUN("%s() %px -> %#lx size: %zu\n",
-		__func__, addr, (long)(iovp | offset), size);
+	DBG_RUN("%s() %pa -> %#lx size: %zu\n",
+		__func__, &addr, (long)(iovp | offset), size);
 
 	/* If not cacheline aligned, force SAFE_DMA on the whole mess */
-	if((size % L1_CACHE_BYTES) || ((unsigned long)addr % L1_CACHE_BYTES))
+	if ((size % L1_CACHE_BYTES) || (addr % L1_CACHE_BYTES))
 		hint |= HINT_SAFE_DMA;
 
 	while(size > 0) {
-		ccio_io_pdir_entry(pdir_start, KERNEL_SPACE, (unsigned long)addr, hint);
+		ccio_io_pdir_entry(pdir_start, KERNEL_SPACE, addr, hint);
 
 		DBG_RUN(" pdir %p %08x%08x\n",
 			pdir_start,
@@ -773,17 +773,18 @@ ccio_map_single(struct device *dev, void *addr, size_t size,
 
 
 static dma_addr_t
-ccio_map_page(struct device *dev, struct page *page, unsigned long offset,
-		size_t size, enum dma_data_direction direction,
-		unsigned long attrs)
+ccio_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction direction, unsigned long attrs)
 {
-	return ccio_map_single(dev, page_address(page) + offset, size,
-			direction);
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
+
+	return ccio_map_single(dev, phys, size, direction);
 }
 
 
 /**
- * ccio_unmap_page - Unmap an address range from the IOMMU.
+ * ccio_unmap_phys - Unmap an address range from the IOMMU.
  * @dev: The PCI device.
  * @iova: The start address of the DMA region.
  * @size: The length of the DMA region.
@@ -791,7 +792,7 @@ ccio_map_page(struct device *dev, struct page *page, unsigned long offset,
  * @attrs: attributes
  */
 static void 
-ccio_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
+ccio_unmap_phys(struct device *dev, dma_addr_t iova, size_t size,
 		enum dma_data_direction direction, unsigned long attrs)
 {
 	struct ioc *ioc;
@@ -853,7 +854,8 @@ ccio_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag,
 
 	if (ret) {
 		memset(ret, 0, size);
-		*dma_handle = ccio_map_single(dev, ret, size, DMA_BIDIRECTIONAL);
+		*dma_handle = ccio_map_single(dev, virt_to_phys(ret), size,
+					      DMA_BIDIRECTIONAL);
 	}
 
 	return ret;
@@ -873,7 +875,7 @@ static void
 ccio_free(struct device *dev, size_t size, void *cpu_addr,
 		dma_addr_t dma_handle, unsigned long attrs)
 {
-	ccio_unmap_page(dev, dma_handle, size, 0, 0);
+	ccio_unmap_phys(dev, dma_handle, size, 0, 0);
 	free_pages((unsigned long)cpu_addr, get_order(size));
 }
 
@@ -920,7 +922,7 @@ ccio_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
 	/* Fast path single entry scatterlists.  */
 	if (nents == 1) {
 		sg_dma_address(sglist) = ccio_map_single(dev,
-				sg_virt(sglist), sglist->length,
+				sg_phys(sglist), sglist->length,
 				direction);
 		sg_dma_len(sglist) = sglist->length;
 		return 1;
@@ -1004,7 +1006,7 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 #ifdef CCIO_COLLECT_STATS
 		ioc->usg_pages += sg_dma_len(sglist) >> PAGE_SHIFT;
 #endif
-		ccio_unmap_page(dev, sg_dma_address(sglist),
+		ccio_unmap_phys(dev, sg_dma_address(sglist),
 				sg_dma_len(sglist), direction, 0);
 		++sglist;
 		nents--;
@@ -1017,8 +1019,8 @@ static const struct dma_map_ops ccio_ops = {
 	.dma_supported		= ccio_dma_supported,
 	.alloc			= ccio_alloc,
 	.free			= ccio_free,
-	.map_page		= ccio_map_page,
-	.unmap_page		= ccio_unmap_page,
+	.map_phys		= ccio_map_phys,
+	.unmap_phys		= ccio_unmap_phys,
 	.map_sg			= ccio_map_sg,
 	.unmap_sg		= ccio_unmap_sg,
 	.get_sgtable		= dma_common_get_sgtable,
@@ -1072,7 +1074,7 @@ static int ccio_proc_info(struct seq_file *m, void *p)
 			ioc->msingle_calls, ioc->msingle_pages,
 			(int)((ioc->msingle_pages * 1000)/ioc->msingle_calls));
 
-		/* KLUGE - unmap_sg calls unmap_page for each mapped page */
+		/* KLUGE - unmap_sg calls unmap_phys for each mapped page */
 		min = ioc->usingle_calls - ioc->usg_calls;
 		max = ioc->usingle_pages - ioc->usg_pages;
 		seq_printf(m, "pci_unmap_single: %8ld calls  %8ld pages (avg %d/1000)\n",
diff --git a/drivers/parisc/iommu-helpers.h b/drivers/parisc/iommu-helpers.h
index c43f1a212a5c..0691884f5095 100644
--- a/drivers/parisc/iommu-helpers.h
+++ b/drivers/parisc/iommu-helpers.h
@@ -14,7 +14,7 @@ static inline unsigned int
 iommu_fill_pdir(struct ioc *ioc, struct scatterlist *startsg, int nents, 
 		unsigned long hint,
-		void (*iommu_io_pdir_entry)(__le64 *, space_t, unsigned long,
+		void (*iommu_io_pdir_entry)(__le64 *, space_t, phys_addr_t,
 					    unsigned long))
 {
 	struct scatterlist *dma_sg = startsg;	/* pointer to current DMA */
@@ -28,7 +28,7 @@ iommu_fill_pdir(struct ioc *ioc, struct scatterlist *startsg, int nents,
 	dma_sg--;
 
 	while (nents-- > 0) {
-		unsigned long vaddr;
+		phys_addr_t paddr;
 		long size;
 
 		DBG_RUN_SG(" %d : %08lx %p/%05x\n", nents,
@@ -67,7 +67,7 @@ iommu_fill_pdir(struct ioc *ioc, struct scatterlist *startsg, int nents,
 		
 		BUG_ON(pdirp == NULL);
 		
-		vaddr = (unsigned long)sg_virt(startsg);
+		paddr = sg_phys(startsg);
 		sg_dma_len(dma_sg) += startsg->length;
 		size = startsg->length + dma_offset;
 		dma_offset = 0;
@@ -76,8 +76,8 @@ iommu_fill_pdir(struct ioc *ioc, struct scatterlist *startsg, int nents,
 #endif
 		do {
 			iommu_io_pdir_entry(pdirp, KERNEL_SPACE, 
-					    vaddr, hint);
-			vaddr += IOVP_SIZE;
+					    paddr, hint);
+			paddr += IOVP_SIZE;
 			size -= IOVP_SIZE;
 			pdirp++;
 		} while(unlikely(size > 0));
diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
index fc3863c09f83..a6eb6bffa5ea 100644
--- a/drivers/parisc/sba_iommu.c
+++ b/drivers/parisc/sba_iommu.c
@@ -532,7 +532,7 @@ typedef unsigned long space_t;
  * sba_io_pdir_entry - fill in one IO PDIR entry
  * @pdir_ptr:  pointer to IO PDIR entry
  * @sid: process Space ID - currently only support KERNEL_SPACE
- * @vba: Virtual CPU address of buffer to map
+ * @pba: Physical address of buffer to map
  * @hint: DMA hint set to use for this mapping
  *
  * SBA Mapping Routine
@@ -569,20 +569,17 @@ typedef unsigned long space_t;
  */
 
 static void
-sba_io_pdir_entry(__le64 *pdir_ptr, space_t sid, unsigned long vba,
+sba_io_pdir_entry(__le64 *pdir_ptr, space_t sid, phys_addr_t pba,
 		  unsigned long hint)
 {
-	u64 pa; /* physical address */
 	register unsigned ci; /* coherent index */
 
-	pa = lpa(vba);
-	pa &= IOVP_MASK;
+	asm("lci 0(%1), %0" : "=r" (ci) : "r" (phys_to_virt(pba)));
+	pba &= IOVP_MASK;
+	pba |= (ci >> PAGE_SHIFT) & 0xff; /* move CI (8 bits) into lowest byte */
 
-	asm("lci 0(%1), %0" : "=r" (ci) : "r" (vba));
-	pa |= (ci >> PAGE_SHIFT) & 0xff; /* move CI (8 bits) into lowest byte */
-
-	pa |= SBA_PDIR_VALID_BIT;	/* set "valid" bit */
-	*pdir_ptr = cpu_to_le64(pa);	/* swap and store into I/O Pdir */
+	pba |= SBA_PDIR_VALID_BIT;	/* set "valid" bit */
+	*pdir_ptr = cpu_to_le64(pba);	/* swap and store into I/O Pdir */
 
 	/*
 	 * If the PDC_MODEL capabilities has Non-coherent IO-PDIR bit set
@@ -707,7 +704,7 @@ static int sba_dma_supported( struct device *dev, u64 mask)
  * See Documentation/core-api/dma-api-howto.rst
  */
 static dma_addr_t
-sba_map_single(struct device *dev, void *addr, size_t size,
+sba_map_single(struct device *dev, phys_addr_t addr, size_t size,
 	       enum dma_data_direction direction)
 {
 	struct ioc *ioc;
@@ -722,7 +719,7 @@ sba_map_single(struct device *dev, void *addr, size_t size,
 		return DMA_MAPPING_ERROR;
 
 	/* save offset bits */
-	offset = ((dma_addr_t) (long) addr) & ~IOVP_MASK;
+	offset = offset_in_page(addr);
 
 	/* round up to nearest IOVP_SIZE */
 	size = (size + offset + ~IOVP_MASK) & IOVP_MASK;
@@ -739,13 +736,13 @@ sba_map_single(struct device *dev, void *addr, size_t size,
 	pide = sba_alloc_range(ioc, dev, size);
 	iovp = (dma_addr_t) pide << IOVP_SHIFT;
 
-	DBG_RUN("%s() 0x%p -> 0x%lx\n",
-		__func__, addr, (long) iovp | offset);
+	DBG_RUN("%s() 0x%pa -> 0x%lx\n",
+		__func__, &addr, (long) iovp | offset);
 
 	pdir_start = &(ioc->pdir_base[pide]);
 
 	while (size > 0) {
-		sba_io_pdir_entry(pdir_start, KERNEL_SPACE, (unsigned long) addr, 0);
+		sba_io_pdir_entry(pdir_start, KERNEL_SPACE, addr, 0);
 
 		DBG_RUN("	pdir 0x%p %02x%02x%02x%02x%02x%02x%02x%02x\n",
 			pdir_start,
@@ -778,17 +775,18 @@ sba_map_single(struct device *dev, void *addr, size_t size,
 
 
 static dma_addr_t
-sba_map_page(struct device *dev, struct page *page, unsigned long offset,
-		size_t size, enum dma_data_direction direction,
-		unsigned long attrs)
+sba_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+	     enum dma_data_direction direction, unsigned long attrs)
 {
-	return sba_map_single(dev, page_address(page) + offset, size,
-			direction);
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
+
+	return sba_map_single(dev, phys, size, direction);
 }
 
 
 /**
- * sba_unmap_page - unmap one IOVA and free resources
+ * sba_unmap_phys - unmap one IOVA and free resources
  * @dev: instance of PCI owned by the driver that's asking.
  * @iova:  IOVA of driver buffer previously mapped.
 * @size:  number of bytes mapped in driver buffer.
@@ -798,7 +796,7 @@ sba_map_page(struct device *dev, struct page *page, unsigned long offset,
 * See Documentation/core-api/dma-api-howto.rst
 */
 static void
-sba_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
+sba_unmap_phys(struct device *dev, dma_addr_t iova, size_t size,
 	       enum dma_data_direction direction, unsigned long attrs)
 {
 	struct ioc *ioc;
@@ -893,7 +891,7 @@ static void *sba_alloc(struct device *hwdev, size_t size, dma_addr_t *dma_handle
 
 	if (ret) {
 		memset(ret, 0, size);
-		*dma_handle = sba_map_single(hwdev, ret, size, 0);
+		*dma_handle = sba_map_single(hwdev, virt_to_phys(ret), size, 0);
 	}
 
 	return ret;
@@ -914,7 +912,7 @@ static void
 sba_free(struct device *hwdev, size_t size,
 		void *vaddr, dma_addr_t dma_handle, unsigned long attrs)
 {
-	sba_unmap_page(hwdev, dma_handle, size, 0, 0);
+	sba_unmap_phys(hwdev, dma_handle, size, 0, 0);
 	free_pages((unsigned long) vaddr, get_order(size));
 }
 
@@ -962,7 +960,7 @@ sba_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
 
 	/* Fast path single entry scatterlists.  */
 	if (nents == 1) {
-		sg_dma_address(sglist) = sba_map_single(dev, sg_virt(sglist),
+		sg_dma_address(sglist) = sba_map_single(dev, sg_phys(sglist),
 						sglist->length, direction);
 		sg_dma_len(sglist)     = sglist->length;
 		return 1;
@@ -1061,7 +1059,7 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 
 	while (nents && sg_dma_len(sglist)) {
 
-		sba_unmap_page(dev, sg_dma_address(sglist), sg_dma_len(sglist),
+		sba_unmap_phys(dev, sg_dma_address(sglist), sg_dma_len(sglist),
 				direction, 0);
 #ifdef SBA_COLLECT_STATS
 		ioc->usg_pages += ((sg_dma_address(sglist) & ~IOVP_MASK) + sg_dma_len(sglist) + IOVP_SIZE - 1) >> PAGE_SHIFT;
@@ -1085,8 +1083,8 @@ static const struct dma_map_ops sba_ops = {
 	.dma_supported		= sba_dma_supported,
 	.alloc			= sba_alloc,
 	.free			= sba_free,
-	.map_page		= sba_map_page,
-	.unmap_page		= sba_unmap_page,
+	.map_phys		= sba_map_phys,
+	.unmap_phys		= sba_unmap_phys,
 	.map_sg			= sba_map_sg,
 	.unmap_sg		= sba_unmap_sg,
 	.get_sgtable		= dma_common_get_sgtable,

-- 
2.51.0
h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:List-Subscribe:List-Post:List-Id:List-Help:List-Unsubscribe:MIME-Version:Message-ID:References:Sender:Subject:Subject:To:To:Message-Id:Reply-To; bh=51SmkYvClEdI+Ca2UfIVeaUdjvbf9MpogJjDj7Q3Iz8=; b=PKlVQByKNJwfsqJBkJFFgmThgVVNl9Jrd0RiQuY2dKkbpL/Ptr/a3G5bezBxUjSVLIU0Sp3FJ8Nd8C8QPbWwgbHpDTzn9RFqrsdIun2dDqO5ci+gyp80Jle/vAQ24GP/7g0HmECpskSOOx9CtY2298Iwt3uGy3C8R0/YHG1JU4Q= ARC-Authentication-Results: i=1; mx.zohomail.com; dkim=pass; spf=pass (zohomail.com: domain of lists.xenproject.org designates 192.237.175.120 as permitted sender) smtp.mailfrom=xen-devel-bounces@lists.xenproject.org; dmarc=pass header.from= (p=quarantine dis=none) Return-Path: Received: from lists.xenproject.org (lists.xenproject.org [192.237.175.120]) by mx.zohomail.com with SMTPS id 1760519794251352.2302153897293; Wed, 15 Oct 2025 02:16:34 -0700 (PDT) Received: from list by lists.xenproject.org with outflank-mailman.1143470.1477215 (Exim 4.92) (envelope-from ) id 1v8xcK-0008Ca-8v; Wed, 15 Oct 2025 09:16:20 +0000 Received: by outflank-mailman (output) from mailman id 1143470.1477215; Wed, 15 Oct 2025 09:16:20 +0000 Received: from localhost ([127.0.0.1] helo=lists.xenproject.org) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1v8xcK-0008B4-1C; Wed, 15 Oct 2025 09:16:20 +0000 Received: by outflank-mailman (input) for mailman id 1143470; Wed, 15 Oct 2025 09:16:18 +0000 Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254] helo=se1-gles-sth1.inumbo.com) by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from ) id 1v8xZn-0002lR-CP for xen-devel@lists.xenproject.org; Wed, 15 Oct 2025 09:13:43 +0000 Received: from sea.source.kernel.org (sea.source.kernel.org [172.234.252.31]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS id 3dc9e1a0-a9a7-11f0-9d15-b5c5bf9af7f9; Wed, 15 Oct 2025 11:13:42 +0200 (CEST) Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58]) by 
sea.source.kernel.org (Postfix) with ESMTP id A8989437D4; Wed, 15 Oct 2025 09:13:40 +0000 (UTC) Received: by smtp.kernel.org (Postfix) with ESMTPSA id D7CEEC4CEF8; Wed, 15 Oct 2025 09:13:39 +0000 (UTC) X-Outflank-Mailman: Message body and most headers restored to incoming version X-BeenThere: xen-devel@lists.xenproject.org List-Id: Xen developer discussion List-Unsubscribe: , List-Post: List-Help: List-Subscribe: , Errors-To: xen-devel-bounces@lists.xenproject.org Precedence: list Sender: "Xen-devel" X-Inumbo-ID: 3dc9e1a0-a9a7-11f0-9d15-b5c5bf9af7f9 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1760519620; bh=Qk6GVbNYp0zCxRraEnpdrSWtbzmnSX3KsqwxS045G4I=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=kwjaHQAs6qi10IRHkP34FFu6MB838DaoM8BrIjZY2CPZS7lp8Wblk2gg6bkGTk5lg 1Y9Hvo2H+k62AGJYGUBHR0iIYlzWWKhEaGF6gL+L64z09yf2TQfkgeMMRCe1bu+0V9 pHCSY+A7kBzRB+b1Q7HKRWK8rw8/CcA/Bgwp1ndA9fYO9kHGfneioqqgO7K0y5HyoD FRo7qyuc8WK0T6efc4Pw7KN8CoAzPBgjCfG8FLEhv68B6r+MAQkv/LWEXl5WOgPQGj VrHi+69EJYWiJK2W73MZeeNs1blfvAxcQb2LFya/g/OKfxsRdq4sypxZBlbPf9w3Mm kHbw6vnAwnB9A== From: Leon Romanovsky To: Marek Szyprowski , Robin Murphy , Russell King , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Richard Henderson , Matt Turner , Thomas Bogendoerfer , "James E.J. Bottomley" , Helge Deller , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Geoff Levand , "David S. Miller" , Andreas Larsson , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. 
Peter Anvin" Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org Subject: [PATCH v5 10/14] powerpc: Convert to physical address DMA mapping Date: Wed, 15 Oct 2025 12:12:56 +0300 Message-ID: <20251015-remove-map-page-v5-10-3bbfe3a25cdf@kernel.org> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org> References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" X-Mailer: b4 0.15-dev Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @kernel.org) X-ZM-MESSAGEID: 1760519795651154100 From: Leon Romanovsky Adapt the PowerPC DMA code to use physical addresses in order to prepare for the removal of .map_page and .unmap_page. Signed-off-by: Leon Romanovsky --- arch/powerpc/include/asm/iommu.h | 8 ++++---- arch/powerpc/kernel/dma-iommu.c | 22 ++++++++++----------- arch/powerpc/kernel/iommu.c | 14 +++++++------- arch/powerpc/platforms/ps3/system-bus.c | 33 ++++++++++++++++++----------= ---- arch/powerpc/platforms/pseries/ibmebus.c | 15 ++++++++------- arch/powerpc/platforms/pseries/vio.c | 21 +++++++++++--------- 6 files changed, 60 insertions(+), 53 deletions(-) diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/io= mmu.h index b410021ad4c6..eafdd63cd6c4 100644 --- a/arch/powerpc/include/asm/iommu.h +++ b/arch/powerpc/include/asm/iommu.h @@ -274,12 +274,12 @@ extern void *iommu_alloc_coherent(struct device *dev,= struct iommu_table *tbl, unsigned long mask, gfp_t flag, int node); extern void iommu_free_coherent(struct iommu_table *tbl, size_t size, void *vaddr, dma_addr_t dma_handle); -extern dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *t= bl, - struct page *page, unsigned long offset, - 
size_t size, unsigned long mask, +extern dma_addr_t iommu_map_phys(struct device *dev, struct iommu_table *t= bl, + phys_addr_t phys, size_t size, + unsigned long mask, enum dma_data_direction direction, unsigned long attrs); -extern void iommu_unmap_page(struct iommu_table *tbl, dma_addr_t dma_handl= e, +extern void iommu_unmap_phys(struct iommu_table *tbl, dma_addr_t dma_handl= e, size_t size, enum dma_data_direction direction, unsigned long attrs); =20 diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iomm= u.c index 0359ab72cd3b..aa3689d61917 100644 --- a/arch/powerpc/kernel/dma-iommu.c +++ b/arch/powerpc/kernel/dma-iommu.c @@ -93,28 +93,26 @@ static void dma_iommu_free_coherent(struct device *dev,= size_t size, =20 /* Creates TCEs for a user provided buffer. The user buffer must be * contiguous real kernel storage (not vmalloc). The address passed here - * comprises a page address and offset into that page. The dma_addr_t - * returned will point to the same byte within the page as was passed in. + * is a physical address to that page. The dma_addr_t returned will point + * to the same byte within the page as was passed in. 
*/ -static dma_addr_t dma_iommu_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, +static dma_addr_t dma_iommu_map_phys(struct device *dev, phys_addr_t phys, + size_t size, enum dma_data_direction direction, unsigned long attrs) { - return iommu_map_page(dev, get_iommu_table_base(dev), page, offset, - size, dma_get_mask(dev), direction, attrs); + return iommu_map_phys(dev, get_iommu_table_base(dev), phys, size, + dma_get_mask(dev), direction, attrs); } =20 - -static void dma_iommu_unmap_page(struct device *dev, dma_addr_t dma_handle, +static void dma_iommu_unmap_phys(struct device *dev, dma_addr_t dma_handle, size_t size, enum dma_data_direction direction, unsigned long attrs) { - iommu_unmap_page(get_iommu_table_base(dev), dma_handle, size, direction, + iommu_unmap_phys(get_iommu_table_base(dev), dma_handle, size, direction, attrs); } =20 - static int dma_iommu_map_sg(struct device *dev, struct scatterlist *sglist, int nelems, enum dma_data_direction direction, unsigned long attrs) @@ -211,8 +209,8 @@ const struct dma_map_ops dma_iommu_ops =3D { .map_sg =3D dma_iommu_map_sg, .unmap_sg =3D dma_iommu_unmap_sg, .dma_supported =3D dma_iommu_dma_supported, - .map_page =3D dma_iommu_map_page, - .unmap_page =3D dma_iommu_unmap_page, + .map_phys =3D dma_iommu_map_phys, + .unmap_phys =3D dma_iommu_unmap_phys, .get_required_mask =3D dma_iommu_get_required_mask, .mmap =3D dma_common_mmap, .get_sgtable =3D dma_common_get_sgtable, diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c index 244eb4857e7f..6b5f4b72ce97 100644 --- a/arch/powerpc/kernel/iommu.c +++ b/arch/powerpc/kernel/iommu.c @@ -848,12 +848,12 @@ EXPORT_SYMBOL_GPL(iommu_tce_table_put); =20 /* Creates TCEs for a user provided buffer. The user buffer must be * contiguous real kernel storage (not vmalloc). The address passed here - * comprises a page address and offset into that page. 
The dma_addr_t - * returned will point to the same byte within the page as was passed in. + * is a physical address into that page. The dma_addr_t returned will point + * to the same byte within the page as was passed in. */ -dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl, - struct page *page, unsigned long offset, size_t size, - unsigned long mask, enum dma_data_direction direction, +dma_addr_t iommu_map_phys(struct device *dev, struct iommu_table *tbl, + phys_addr_t phys, size_t size, unsigned long mask, + enum dma_data_direction direction, unsigned long attrs) { dma_addr_t dma_handle =3D DMA_MAPPING_ERROR; @@ -863,7 +863,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct io= mmu_table *tbl, =20 BUG_ON(direction =3D=3D DMA_NONE); =20 - vaddr =3D page_address(page) + offset; + vaddr =3D phys_to_virt(phys); uaddr =3D (unsigned long)vaddr; =20 if (tbl) { @@ -890,7 +890,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct io= mmu_table *tbl, return dma_handle; } =20 -void iommu_unmap_page(struct iommu_table *tbl, dma_addr_t dma_handle, +void iommu_unmap_phys(struct iommu_table *tbl, dma_addr_t dma_handle, size_t size, enum dma_data_direction direction, unsigned long attrs) { diff --git a/arch/powerpc/platforms/ps3/system-bus.c b/arch/powerpc/platfor= ms/ps3/system-bus.c index afbaabf182d0..f4f3477d3a23 100644 --- a/arch/powerpc/platforms/ps3/system-bus.c +++ b/arch/powerpc/platforms/ps3/system-bus.c @@ -551,18 +551,20 @@ static void ps3_free_coherent(struct device *_dev, si= ze_t size, void *vaddr, =20 /* Creates TCEs for a user provided buffer. The user buffer must be * contiguous real kernel storage (not vmalloc). The address passed here - * comprises a page address and offset into that page. The dma_addr_t - * returned will point to the same byte within the page as was passed in. + * is the physical address of that page. The dma_addr_t returned will point + * to the same byte within the page as was passed in.
*/ =20 -static dma_addr_t ps3_sb_map_page(struct device *_dev, struct page *page, - unsigned long offset, size_t size, enum dma_data_direction direction, - unsigned long attrs) +static dma_addr_t ps3_sb_map_phys(struct device *_dev, phys_addr_t phys, + size_t size, enum dma_data_direction direction, unsigned long attrs) { struct ps3_system_bus_device *dev =3D ps3_dev_to_system_bus_dev(_dev); int result; dma_addr_t bus_addr; - void *ptr =3D page_address(page) + offset; + void *ptr =3D phys_to_virt(phys); + + if (unlikely(attrs & DMA_ATTR_MMIO)) + return DMA_MAPPING_ERROR; =20 result =3D ps3_dma_map(dev->d_region, (unsigned long)ptr, size, &bus_addr, @@ -577,8 +579,8 @@ static dma_addr_t ps3_sb_map_page(struct device *_dev, = struct page *page, return bus_addr; } =20 -static dma_addr_t ps3_ioc0_map_page(struct device *_dev, struct page *page, - unsigned long offset, size_t size, +static dma_addr_t ps3_ioc0_map_phys(struct device *_dev, phys_addr_t phys, + size_t size, enum dma_data_direction direction, unsigned long attrs) { @@ -586,7 +588,10 @@ static dma_addr_t ps3_ioc0_map_page(struct device *_de= v, struct page *page, int result; dma_addr_t bus_addr; u64 iopte_flag; - void *ptr =3D page_address(page) + offset; + void *ptr =3D phys_to_virt(phys); + + if (unlikely(attrs & DMA_ATTR_MMIO)) + return DMA_MAPPING_ERROR; =20 iopte_flag =3D CBE_IOPTE_M; switch (direction) { @@ -613,7 +618,7 @@ static dma_addr_t ps3_ioc0_map_page(struct device *_dev= , struct page *page, return bus_addr; } =20 -static void ps3_unmap_page(struct device *_dev, dma_addr_t dma_addr, +static void ps3_unmap_phys(struct device *_dev, dma_addr_t dma_addr, size_t size, enum dma_data_direction direction, unsigned long attrs) { struct ps3_system_bus_device *dev =3D ps3_dev_to_system_bus_dev(_dev); @@ -690,8 +695,8 @@ static const struct dma_map_ops ps3_sb_dma_ops =3D { .map_sg =3D ps3_sb_map_sg, .unmap_sg =3D ps3_sb_unmap_sg, .dma_supported =3D ps3_dma_supported, - .map_page =3D ps3_sb_map_page, - 
.unmap_page =3D ps3_unmap_page, + .map_phys =3D ps3_sb_map_phys, + .unmap_phys =3D ps3_unmap_phys, .mmap =3D dma_common_mmap, .get_sgtable =3D dma_common_get_sgtable, .alloc_pages_op =3D dma_common_alloc_pages, @@ -704,8 +709,8 @@ static const struct dma_map_ops ps3_ioc0_dma_ops =3D { .map_sg =3D ps3_ioc0_map_sg, .unmap_sg =3D ps3_ioc0_unmap_sg, .dma_supported =3D ps3_dma_supported, - .map_page =3D ps3_ioc0_map_page, - .unmap_page =3D ps3_unmap_page, + .map_phys =3D ps3_ioc0_map_phys, + .unmap_phys =3D ps3_unmap_phys, .mmap =3D dma_common_mmap, .get_sgtable =3D dma_common_get_sgtable, .alloc_pages_op =3D dma_common_alloc_pages, diff --git a/arch/powerpc/platforms/pseries/ibmebus.c b/arch/powerpc/platfo= rms/pseries/ibmebus.c index 3436b0af795e..cad2deb7e70d 100644 --- a/arch/powerpc/platforms/pseries/ibmebus.c +++ b/arch/powerpc/platforms/pseries/ibmebus.c @@ -86,17 +86,18 @@ static void ibmebus_free_coherent(struct device *dev, kfree(vaddr); } =20 -static dma_addr_t ibmebus_map_page(struct device *dev, - struct page *page, - unsigned long offset, +static dma_addr_t ibmebus_map_phys(struct device *dev, phys_addr_t phys, size_t size, enum dma_data_direction direction, unsigned long attrs) { - return (dma_addr_t)(page_address(page) + offset); + if (attrs & DMA_ATTR_MMIO) + return DMA_MAPPING_ERROR; + + return (dma_addr_t)(phys_to_virt(phys)); } =20 -static void ibmebus_unmap_page(struct device *dev, +static void ibmebus_unmap_phys(struct device *dev, dma_addr_t dma_addr, size_t size, enum dma_data_direction direction, @@ -146,8 +147,8 @@ static const struct dma_map_ops ibmebus_dma_ops =3D { .unmap_sg =3D ibmebus_unmap_sg, .dma_supported =3D ibmebus_dma_supported, .get_required_mask =3D ibmebus_dma_get_required_mask, - .map_page =3D ibmebus_map_page, - .unmap_page =3D ibmebus_unmap_page, + .map_phys =3D ibmebus_map_phys, + .unmap_phys =3D ibmebus_unmap_phys, }; =20 static int ibmebus_match_path(struct device *dev, const void *data) diff --git 
a/arch/powerpc/platforms/pseries/vio.c b/arch/powerpc/platforms/= pseries/vio.c index ac1d2d2c9a88..18cffac5468f 100644 --- a/arch/powerpc/platforms/pseries/vio.c +++ b/arch/powerpc/platforms/pseries/vio.c @@ -512,18 +512,21 @@ static void vio_dma_iommu_free_coherent(struct device= *dev, size_t size, vio_cmo_dealloc(viodev, roundup(size, PAGE_SIZE)); } =20 -static dma_addr_t vio_dma_iommu_map_page(struct device *dev, struct page *= page, - unsigned long offset, size_t size, - enum dma_data_direction direction, - unsigned long attrs) +static dma_addr_t vio_dma_iommu_map_phys(struct device *dev, phys_addr_t p= hys, + size_t size, + enum dma_data_direction direction, + unsigned long attrs) { struct vio_dev *viodev =3D to_vio_dev(dev); struct iommu_table *tbl =3D get_iommu_table_base(dev); dma_addr_t ret =3D DMA_MAPPING_ERROR; =20 + if (unlikely(attrs & DMA_ATTR_MMIO)) + return ret; + if (vio_cmo_alloc(viodev, roundup(size, IOMMU_PAGE_SIZE(tbl)))) goto out_fail; - ret =3D iommu_map_page(dev, tbl, page, offset, size, dma_get_mask(dev), + ret =3D iommu_map_phys(dev, tbl, phys, size, dma_get_mask(dev), direction, attrs); if (unlikely(ret =3D=3D DMA_MAPPING_ERROR)) goto out_deallocate; @@ -536,7 +539,7 @@ static dma_addr_t vio_dma_iommu_map_page(struct device = *dev, struct page *page, return DMA_MAPPING_ERROR; } =20 -static void vio_dma_iommu_unmap_page(struct device *dev, dma_addr_t dma_ha= ndle, +static void vio_dma_iommu_unmap_phys(struct device *dev, dma_addr_t dma_ha= ndle, size_t size, enum dma_data_direction direction, unsigned long attrs) @@ -544,7 +547,7 @@ static void vio_dma_iommu_unmap_page(struct device *dev= , dma_addr_t dma_handle, struct vio_dev *viodev =3D to_vio_dev(dev); struct iommu_table *tbl =3D get_iommu_table_base(dev); =20 - iommu_unmap_page(tbl, dma_handle, size, direction, attrs); + iommu_unmap_phys(tbl, dma_handle, size, direction, attrs); vio_cmo_dealloc(viodev, roundup(size, IOMMU_PAGE_SIZE(tbl))); } =20 @@ -605,8 +608,8 @@ static const 
struct dma_map_ops vio_dma_mapping_ops =3D= { .free =3D vio_dma_iommu_free_coherent, .map_sg =3D vio_dma_iommu_map_sg, .unmap_sg =3D vio_dma_iommu_unmap_sg, - .map_page =3D vio_dma_iommu_map_page, - .unmap_page =3D vio_dma_iommu_unmap_page, + .map_phys =3D vio_dma_iommu_map_phys, + .unmap_phys =3D vio_dma_iommu_unmap_phys, .dma_supported =3D dma_iommu_dma_supported, .get_required_mask =3D dma_iommu_get_required_mask, .mmap =3D dma_common_mmap, --=20 2.51.0 From nobody Thu Oct 30 18:20:50 2025 From: Leon Romanovsky To: Marek Szyprowski , Robin Murphy , Russell King , Juergen Gross , Stefano Stabellini , Oleksandr Tyshchenko , Richard Henderson , Matt Turner , Thomas Bogendoerfer , "James E.J. Bottomley" , Helge Deller , Madhavan Srinivasan , Michael Ellerman , Nicholas Piggin , Christophe Leroy , Geoff Levand , "David S. Miller" , Andreas Larsson , Thomas Gleixner , Ingo Molnar , Borislav Petkov , Dave Hansen , x86@kernel.org, "H. Peter Anvin" Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org, linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, sparclinux@vger.kernel.org Subject: [PATCH v5 11/14] sparc: Use physical address DMA mapping Date: Wed, 15 Oct 2025 12:12:57 +0300 Message-ID: <20251015-remove-map-page-v5-11-3bbfe3a25cdf@kernel.org> X-Mailer: git-send-email 2.51.0 In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org> References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" X-Mailer: b4 0.15-dev Content-Transfer-Encoding: quoted-printable X-ZohoMail-DKIM: pass (identity @kernel.org) X-ZM-MESSAGEID: 1760519787552154100 From: Leon Romanovsky Convert sparc architecture DMA code to use .map_phys callback.
Signed-off-by: Leon Romanovsky --- arch/sparc/kernel/iommu.c | 30 +++++++++++++++++----------- arch/sparc/kernel/pci_sun4v.c | 31 ++++++++++++++++++----------- arch/sparc/mm/io-unit.c | 38 ++++++++++++++++++----------------- arch/sparc/mm/iommu.c | 46 ++++++++++++++++++++++-----------------= ---- 4 files changed, 82 insertions(+), 63 deletions(-) diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c index da0363692528..46ef88bc9c26 100644 --- a/arch/sparc/kernel/iommu.c +++ b/arch/sparc/kernel/iommu.c @@ -260,26 +260,35 @@ static void dma_4u_free_coherent(struct device *dev, = size_t size, free_pages((unsigned long)cpu, order); } =20 -static dma_addr_t dma_4u_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t sz, - enum dma_data_direction direction, +static dma_addr_t dma_4u_map_phys(struct device *dev, phys_addr_t phys, + size_t sz, enum dma_data_direction direction, unsigned long attrs) { struct iommu *iommu; struct strbuf *strbuf; iopte_t *base; unsigned long flags, npages, oaddr; - unsigned long i, base_paddr, ctx; + unsigned long i, ctx; u32 bus_addr, ret; unsigned long iopte_protection; =20 + if (unlikely(attrs & DMA_ATTR_MMIO)) + /* + * This check is included because older versions of the code + * lacked MMIO path support, and my ability to test this path + * is limited. However, from a software technical standpoint, + * there is no restriction, as the following code operates + * solely on physical addresses. 
+ */ + goto bad_no_ctx; + iommu =3D dev->archdata.iommu; strbuf =3D dev->archdata.stc; =20 if (unlikely(direction =3D=3D DMA_NONE)) goto bad_no_ctx; =20 - oaddr =3D (unsigned long)(page_address(page) + offset); + oaddr =3D (unsigned long)(phys_to_virt(phys)); npages =3D IO_PAGE_ALIGN(oaddr + sz) - (oaddr & IO_PAGE_MASK); npages >>=3D IO_PAGE_SHIFT; =20 @@ -296,7 +305,6 @@ static dma_addr_t dma_4u_map_page(struct device *dev, s= truct page *page, bus_addr =3D (iommu->tbl.table_map_base + ((base - iommu->page_table) << IO_PAGE_SHIFT)); ret =3D bus_addr | (oaddr & ~IO_PAGE_MASK); - base_paddr =3D __pa(oaddr & IO_PAGE_MASK); if (strbuf->strbuf_enabled) iopte_protection =3D IOPTE_STREAMING(ctx); else @@ -304,8 +312,8 @@ static dma_addr_t dma_4u_map_page(struct device *dev, s= truct page *page, if (direction !=3D DMA_TO_DEVICE) iopte_protection |=3D IOPTE_WRITE; =20 - for (i =3D 0; i < npages; i++, base++, base_paddr +=3D IO_PAGE_SIZE) - iopte_val(*base) =3D iopte_protection | base_paddr; + for (i =3D 0; i < npages; i++, base++, phys +=3D IO_PAGE_SIZE) + iopte_val(*base) =3D iopte_protection | phys; =20 return ret; =20 @@ -383,7 +391,7 @@ static void strbuf_flush(struct strbuf *strbuf, struct = iommu *iommu, vaddr, ctx, npages); } =20 -static void dma_4u_unmap_page(struct device *dev, dma_addr_t bus_addr, +static void dma_4u_unmap_phys(struct device *dev, dma_addr_t bus_addr, size_t sz, enum dma_data_direction direction, unsigned long attrs) { @@ -753,8 +761,8 @@ static int dma_4u_supported(struct device *dev, u64 dev= ice_mask) static const struct dma_map_ops sun4u_dma_ops =3D { .alloc =3D dma_4u_alloc_coherent, .free =3D dma_4u_free_coherent, - .map_page =3D dma_4u_map_page, - .unmap_page =3D dma_4u_unmap_page, + .map_phys =3D dma_4u_map_phys, + .unmap_phys =3D dma_4u_unmap_phys, .map_sg =3D dma_4u_map_sg, .unmap_sg =3D dma_4u_unmap_sg, .sync_single_for_cpu =3D dma_4u_sync_single_for_cpu, diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c index 
b720b21ccfbd..791f0a76665f 100644 --- a/arch/sparc/kernel/pci_sun4v.c +++ b/arch/sparc/kernel/pci_sun4v.c @@ -352,9 +352,8 @@ static void dma_4v_free_coherent(struct device *dev, si= ze_t size, void *cpu, free_pages((unsigned long)cpu, order); } =20 -static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t sz, - enum dma_data_direction direction, +static dma_addr_t dma_4v_map_phys(struct device *dev, phys_addr_t phys, + size_t sz, enum dma_data_direction direction, unsigned long attrs) { struct iommu *iommu; @@ -362,18 +361,27 @@ static dma_addr_t dma_4v_map_page(struct device *dev,= struct page *page, struct iommu_map_table *tbl; u64 mask; unsigned long flags, npages, oaddr; - unsigned long i, base_paddr; - unsigned long prot; + unsigned long i, prot; dma_addr_t bus_addr, ret; long entry; =20 + if (unlikely(attrs & DMA_ATTR_MMIO)) + /* + * This check is included because older versions of the code + * lacked MMIO path support, and my ability to test this path + * is limited. However, from a software technical standpoint, + * there is no restriction, as the following code operates + * solely on physical addresses. 
+ */ + goto bad; + iommu =3D dev->archdata.iommu; atu =3D iommu->atu; =20 if (unlikely(direction =3D=3D DMA_NONE)) goto bad; =20 - oaddr =3D (unsigned long)(page_address(page) + offset); + oaddr =3D (unsigned long)(phys_to_virt(phys)); npages =3D IO_PAGE_ALIGN(oaddr + sz) - (oaddr & IO_PAGE_MASK); npages >>=3D IO_PAGE_SHIFT; =20 @@ -391,7 +399,6 @@ static dma_addr_t dma_4v_map_page(struct device *dev, s= truct page *page, =20 bus_addr =3D (tbl->table_map_base + (entry << IO_PAGE_SHIFT)); ret =3D bus_addr | (oaddr & ~IO_PAGE_MASK); - base_paddr =3D __pa(oaddr & IO_PAGE_MASK); prot =3D HV_PCI_MAP_ATTR_READ; if (direction !=3D DMA_TO_DEVICE) prot |=3D HV_PCI_MAP_ATTR_WRITE; @@ -403,8 +410,8 @@ static dma_addr_t dma_4v_map_page(struct device *dev, s= truct page *page, =20 iommu_batch_start(dev, prot, entry); =20 - for (i =3D 0; i < npages; i++, base_paddr +=3D IO_PAGE_SIZE) { - long err =3D iommu_batch_add(base_paddr, mask); + for (i =3D 0; i < npages; i++, phys +=3D IO_PAGE_SIZE) { + long err =3D iommu_batch_add(phys, mask); if (unlikely(err < 0L)) goto iommu_map_fail; } @@ -426,7 +433,7 @@ static dma_addr_t dma_4v_map_page(struct device *dev, s= truct page *page, return DMA_MAPPING_ERROR; } =20 -static void dma_4v_unmap_page(struct device *dev, dma_addr_t bus_addr, +static void dma_4v_unmap_phys(struct device *dev, dma_addr_t bus_addr, size_t sz, enum dma_data_direction direction, unsigned long attrs) { @@ -686,8 +693,8 @@ static int dma_4v_supported(struct device *dev, u64 dev= ice_mask) static const struct dma_map_ops sun4v_dma_ops =3D { .alloc =3D dma_4v_alloc_coherent, .free =3D dma_4v_free_coherent, - .map_page =3D dma_4v_map_page, - .unmap_page =3D dma_4v_unmap_page, + .map_phys =3D dma_4v_map_phys, + .unmap_phys =3D dma_4v_unmap_phys, .map_sg =3D dma_4v_map_sg, .unmap_sg =3D dma_4v_unmap_sg, .dma_supported =3D dma_4v_supported, diff --git a/arch/sparc/mm/io-unit.c b/arch/sparc/mm/io-unit.c index d8376f61b4d0..d409cb450de4 100644 --- a/arch/sparc/mm/io-unit.c 
+++ b/arch/sparc/mm/io-unit.c @@ -94,13 +94,14 @@ static int __init iounit_init(void) subsys_initcall(iounit_init); =20 /* One has to hold iounit->lock to call this */ -static unsigned long iounit_get_area(struct iounit_struct *iounit, unsigne= d long vaddr, int size) +static dma_addr_t iounit_get_area(struct iounit_struct *iounit, + phys_addr_t phys, int size) { int i, j, k, npages; unsigned long rotor, scan, limit; iopte_t iopte; =20 - npages =3D ((vaddr & ~PAGE_MASK) + size + (PAGE_SIZE-1)) >> PAGE_S= HIFT; + npages =3D (offset_in_page(phys) + size + (PAGE_SIZE - 1)) >> PAGE_SHIFT; =20 /* A tiny bit of magic ingredience :) */ switch (npages) { @@ -109,7 +110,7 @@ static unsigned long iounit_get_area(struct iounit_stru= ct *iounit, unsigned long default: i =3D 0x0213; break; } =09 - IOD(("iounit_get_area(%08lx,%d[%d])=3D", vaddr, size, npages)); + IOD(("%s(%pa,%d[%d])=3D", __func__, &phys, size, npages)); =09 next: j =3D (i & 15); rotor =3D iounit->rotor[j - 1]; @@ -124,7 +125,8 @@ nexti: scan =3D find_next_zero_bit(iounit->bmap, limit,= scan); } i >>=3D 4; if (!(i & 15)) - panic("iounit_get_area: Couldn't find free iopte slots for (%08lx,%d)\n= ", vaddr, size); + panic("iounit_get_area: Couldn't find free iopte slots for (%pa,%d)\n", + &phys, size); goto next; } for (k =3D 1, scan++; k < npages; k++) @@ -132,30 +134,29 @@ nexti: scan =3D find_next_zero_bit(iounit->bmap, limi= t, scan); goto nexti; iounit->rotor[j - 1] =3D (scan < limit) ? 
scan : iounit->limit[j - 1]; scan -=3D npages; - iopte =3D MKIOPTE(__pa(vaddr & PAGE_MASK)); - vaddr =3D IOUNIT_DMA_BASE + (scan << PAGE_SHIFT) + (vaddr & ~PAGE_MASK); + iopte =3D MKIOPTE(phys & PAGE_MASK); + phys =3D IOUNIT_DMA_BASE + (scan << PAGE_SHIFT) + offset_in_page(phys); for (k =3D 0; k < npages; k++, iopte =3D __iopte(iopte_val(iopte) + 0x100= ), scan++) { set_bit(scan, iounit->bmap); sbus_writel(iopte_val(iopte), &iounit->page_table[scan]); } - IOD(("%08lx\n", vaddr)); - return vaddr; + IOD(("%pa\n", &phys)); + return phys; } =20 -static dma_addr_t iounit_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t len, enum dma_data_direction dir, - unsigned long attrs) +static dma_addr_t iounit_map_phys(struct device *dev, phys_addr_t phys, + size_t len, enum dma_data_direction dir, unsigned long attrs) { - void *vaddr =3D page_address(page) + offset; struct iounit_struct *iounit =3D dev->archdata.iommu; - unsigned long ret, flags; + unsigned long flags; + dma_addr_t ret; =09 /* XXX So what is maxphys for us and how do drivers know it? 
 */
	if (!len || len > 256 * 1024)
		return DMA_MAPPING_ERROR;
 
	spin_lock_irqsave(&iounit->lock, flags);
-	ret = iounit_get_area(iounit, (unsigned long)vaddr, len);
+	ret = iounit_get_area(iounit, phys, len);
	spin_unlock_irqrestore(&iounit->lock, flags);
	return ret;
}
@@ -171,14 +172,15 @@ static int iounit_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
	/* FIXME: Cache some resolved pages - often several sg entries are to the same page */
	spin_lock_irqsave(&iounit->lock, flags);
	for_each_sg(sgl, sg, nents, i) {
-		sg->dma_address = iounit_get_area(iounit, (unsigned long) sg_virt(sg), sg->length);
+		sg->dma_address =
+			iounit_get_area(iounit, sg_phys(sg), sg->length);
		sg->dma_length = sg->length;
	}
	spin_unlock_irqrestore(&iounit->lock, flags);
	return nents;
}
 
-static void iounit_unmap_page(struct device *dev, dma_addr_t vaddr, size_t len,
+static void iounit_unmap_phys(struct device *dev, dma_addr_t vaddr, size_t len,
		enum dma_data_direction dir, unsigned long attrs)
{
	struct iounit_struct *iounit = dev->archdata.iommu;
@@ -279,8 +281,8 @@ static const struct dma_map_ops iounit_dma_ops = {
	.alloc = iounit_alloc,
	.free = iounit_free,
 #endif
-	.map_page = iounit_map_page,
-	.unmap_page = iounit_unmap_page,
+	.map_phys = iounit_map_phys,
+	.unmap_phys = iounit_unmap_phys,
	.map_sg = iounit_map_sg,
	.unmap_sg = iounit_unmap_sg,
};
diff --git a/arch/sparc/mm/iommu.c b/arch/sparc/mm/iommu.c
index 5a5080db800f..f48adf62724a 100644
--- a/arch/sparc/mm/iommu.c
+++ b/arch/sparc/mm/iommu.c
@@ -181,18 +181,20 @@ static void iommu_flush_iotlb(iopte_t *iopte, unsigned int niopte)
	}
}
 
-static dma_addr_t __sbus_iommu_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t len, bool per_page_flush)
+static dma_addr_t __sbus_iommu_map_phys(struct device *dev, phys_addr_t paddr,
+		size_t len, bool per_page_flush, unsigned long attrs)
{
	struct iommu_struct *iommu = dev->archdata.iommu;
-	phys_addr_t paddr = page_to_phys(page) + offset;
-	unsigned long off = paddr & ~PAGE_MASK;
+	unsigned long off = offset_in_page(paddr);
	unsigned long npages = (off + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
	unsigned long pfn = __phys_to_pfn(paddr);
	unsigned int busa, busa0;
	iopte_t *iopte, *iopte0;
	int ioptex, i;
 
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
+
	/* XXX So what is maxphys for us and how do drivers know it? */
	if (!len || len > 256 * 1024)
		return DMA_MAPPING_ERROR;
@@ -202,10 +204,10 @@ static dma_addr_t __sbus_iommu_map_page(struct device *dev, struct page *page,
	 * XXX Is this a good assumption?
	 * XXX What if someone else unmaps it here and races us?
	 */
-	if (per_page_flush && !PageHighMem(page)) {
+	if (per_page_flush && !PhysHighMem(paddr)) {
		unsigned long vaddr, p;
 
-		vaddr = (unsigned long)page_address(page) + offset;
+		vaddr = (unsigned long)phys_to_virt(paddr);
		for (p = vaddr & PAGE_MASK; p < vaddr + len; p += PAGE_SIZE)
			flush_page_for_dma(p);
	}
@@ -231,19 +233,19 @@ static dma_addr_t __sbus_iommu_map_page(struct device *dev, struct page *page,
	return busa0 + off;
}
 
-static dma_addr_t sbus_iommu_map_page_gflush(struct device *dev,
-		struct page *page, unsigned long offset, size_t len,
-		enum dma_data_direction dir, unsigned long attrs)
+static dma_addr_t sbus_iommu_map_phys_gflush(struct device *dev,
+		phys_addr_t phys, size_t len, enum dma_data_direction dir,
+		unsigned long attrs)
{
	flush_page_for_dma(0);
-	return __sbus_iommu_map_page(dev, page, offset, len, false);
+	return __sbus_iommu_map_phys(dev, phys, len, false, attrs);
}
 
-static dma_addr_t sbus_iommu_map_page_pflush(struct device *dev,
-		struct page *page, unsigned long offset, size_t len,
-		enum dma_data_direction dir, unsigned long attrs)
+static dma_addr_t sbus_iommu_map_phys_pflush(struct device *dev,
+		phys_addr_t phys, size_t len, enum dma_data_direction dir,
+		unsigned long attrs)
{
-	return __sbus_iommu_map_page(dev, page, offset, len, true);
+	return __sbus_iommu_map_phys(dev, phys, len, true, attrs);
}
 
static int __sbus_iommu_map_sg(struct device *dev, struct scatterlist *sgl,
@@ -254,8 +256,8 @@ static int __sbus_iommu_map_sg(struct device *dev, struct scatterlist *sgl,
	int j;
 
	for_each_sg(sgl, sg, nents, j) {
-		sg->dma_address =__sbus_iommu_map_page(dev, sg_page(sg),
-			sg->offset, sg->length, per_page_flush);
+		sg->dma_address = __sbus_iommu_map_phys(dev, sg_phys(sg),
+			sg->length, per_page_flush, attrs);
		if (sg->dma_address == DMA_MAPPING_ERROR)
			return -EIO;
		sg->dma_length = sg->length;
@@ -277,7 +279,7 @@ static int sbus_iommu_map_sg_pflush(struct device *dev, struct scatterlist *sgl,
	return __sbus_iommu_map_sg(dev, sgl, nents, dir, attrs, true);
}
 
-static void sbus_iommu_unmap_page(struct device *dev, dma_addr_t dma_addr,
+static void sbus_iommu_unmap_phys(struct device *dev, dma_addr_t dma_addr,
		size_t len, enum dma_data_direction dir, unsigned long attrs)
{
	struct iommu_struct *iommu = dev->archdata.iommu;
@@ -303,7 +305,7 @@ static void sbus_iommu_unmap_sg(struct device *dev, struct scatterlist *sgl,
	int i;
 
	for_each_sg(sgl, sg, nents, i) {
-		sbus_iommu_unmap_page(dev, sg->dma_address, sg->length, dir,
+		sbus_iommu_unmap_phys(dev, sg->dma_address, sg->length, dir,
				attrs);
		sg->dma_address = 0x21212121;
	}
@@ -426,8 +428,8 @@ static const struct dma_map_ops sbus_iommu_dma_gflush_ops = {
	.alloc = sbus_iommu_alloc,
	.free = sbus_iommu_free,
 #endif
-	.map_page = sbus_iommu_map_page_gflush,
-	.unmap_page = sbus_iommu_unmap_page,
+	.map_phys = sbus_iommu_map_phys_gflush,
+	.unmap_phys = sbus_iommu_unmap_phys,
	.map_sg = sbus_iommu_map_sg_gflush,
	.unmap_sg = sbus_iommu_unmap_sg,
};
@@ -437,8 +439,8 @@ static const struct dma_map_ops sbus_iommu_dma_pflush_ops = {
	.alloc = sbus_iommu_alloc,
	.free = sbus_iommu_free,
 #endif
-	.map_page = sbus_iommu_map_page_pflush,
-	.unmap_page = sbus_iommu_unmap_page,
+	.map_phys = sbus_iommu_map_phys_pflush,
+	.unmap_phys = sbus_iommu_unmap_phys,
	.map_sg = sbus_iommu_map_sg_pflush,
	.unmap_sg = sbus_iommu_unmap_sg,
};
-- 
2.51.0
From: Leon Romanovsky
To: Marek Szyprowski, Robin Murphy, Russell King, Juergen Gross,
	Stefano Stabellini, Oleksandr Tyshchenko, Richard Henderson,
	Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley",
	Helge Deller, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Geoff Levand,
	"David S. Miller", Andreas Larsson, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
	"H. Peter Anvin"
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v5 12/14] x86: Use physical address for DMA mapping
Date: Wed, 15 Oct 2025 12:12:58 +0300
Message-ID: <20251015-remove-map-page-v5-12-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Perform mechanical conversion from DMA .map_page to .map_phys.
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 arch/x86/kernel/amd_gart_64.c | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)

diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c
index 3485d419c2f5..93a06307d953 100644
--- a/arch/x86/kernel/amd_gart_64.c
+++ b/arch/x86/kernel/amd_gart_64.c
@@ -222,13 +222,14 @@ static dma_addr_t dma_map_area(struct device *dev, dma_addr_t phys_mem,
 }
 
 /* Map a single area into the IOMMU */
-static dma_addr_t gart_map_page(struct device *dev, struct page *page,
-				unsigned long offset, size_t size,
-				enum dma_data_direction dir,
+static dma_addr_t gart_map_phys(struct device *dev, phys_addr_t paddr,
+				size_t size, enum dma_data_direction dir,
 				unsigned long attrs)
 {
	unsigned long bus;
-	phys_addr_t paddr = page_to_phys(page) + offset;
+
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
 
	if (!need_iommu(dev, paddr, size))
		return paddr;
@@ -242,7 +243,7 @@ static dma_addr_t gart_map_page(struct device *dev, struct page *page,
 /*
  * Free a DMA mapping.
  */
-static void gart_unmap_page(struct device *dev, dma_addr_t dma_addr,
+static void gart_unmap_phys(struct device *dev, dma_addr_t dma_addr,
			    size_t size, enum dma_data_direction dir,
			    unsigned long attrs)
 {
@@ -282,7 +283,7 @@ static void gart_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
	for_each_sg(sg, s, nents, i) {
		if (!s->dma_length || !s->length)
			break;
-		gart_unmap_page(dev, s->dma_address, s->dma_length, dir, 0);
+		gart_unmap_phys(dev, s->dma_address, s->dma_length, dir, 0);
	}
 }
 
@@ -487,7 +488,7 @@ static void gart_free_coherent(struct device *dev, size_t size, void *vaddr,
		       dma_addr_t dma_addr, unsigned long attrs)
 {
-	gart_unmap_page(dev, dma_addr, size, DMA_BIDIRECTIONAL, 0);
+	gart_unmap_phys(dev, dma_addr, size, DMA_BIDIRECTIONAL, 0);
	dma_direct_free(dev, size, vaddr, dma_addr, attrs);
 }
 
@@ -668,8 +669,8 @@ static __init int init_amd_gatt(struct agp_kern_info *info)
 static const struct dma_map_ops gart_dma_ops = {
	.map_sg				= gart_map_sg,
	.unmap_sg			= gart_unmap_sg,
-	.map_page			= gart_map_page,
-	.unmap_page			= gart_unmap_page,
+	.map_phys			= gart_map_phys,
+	.unmap_phys			= gart_unmap_phys,
	.alloc				= gart_alloc_coherent,
	.free				= gart_free_coherent,
	.mmap				= dma_common_mmap,
-- 
2.51.0
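[Editorial sketch, not part of the patch: the new gart_map_phys() fast path makes two decisions before touching the GART — reject DMA_ATTR_MMIO (no struct page behind the address, so it cannot be bounced) and pass the physical address through unmapped when the device can already reach it. A toy model with stand-in constants and a hypothetical `need_iommu_model()` in place of the kernel's device check:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in constants; the real DMA_ATTR_MMIO bit value and error cookie
 * live in the kernel headers. */
#define MODEL_ATTR_MMIO		(1UL << 10)
#define MODEL_MAPPING_ERROR	(~(uint64_t)0)

/* Hypothetical stand-in for need_iommu(): remap only when the buffer
 * end lies beyond the device's DMA mask. */
static int need_iommu_model(uint64_t paddr, size_t size, uint64_t dma_mask)
{
	return paddr + size - 1 > dma_mask;
}

/* Entry checks of gart_map_phys(), order preserved: MMIO rejection
 * first, then the identity-mapping fast path. Returns 0 where the real
 * code would go on to allocate GART entries. */
static uint64_t gart_map_phys_model(uint64_t paddr, size_t size,
				    unsigned long attrs, uint64_t dma_mask)
{
	if (attrs & MODEL_ATTR_MMIO)
		return MODEL_MAPPING_ERROR;	/* no struct page to bounce */
	if (!need_iommu_model(paddr, size, dma_mask))
		return paddr;			/* device reaches it directly */
	return 0;				/* would remap via GART here */
}
```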
From: Leon Romanovsky
Subject: [PATCH v5 13/14] xen: swiotlb: Convert mapping routine to rely on physical address
Date: Wed, 15 Oct 2025 12:12:59 +0300
Message-ID: <20251015-remove-map-page-v5-13-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Switch to .map_phys callback instead of .map_page.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/xen/grant-dma-ops.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index 29257d2639db..14077d23f2a1 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -163,18 +163,22 @@ static void xen_grant_dma_free_pages(struct device *dev, size_t size,
	xen_grant_dma_free(dev, size, page_to_virt(vaddr), dma_handle, 0);
 }
 
-static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
-					 unsigned long offset, size_t size,
+static dma_addr_t xen_grant_dma_map_phys(struct device *dev, phys_addr_t phys,
+					 size_t size,
					 enum dma_data_direction dir,
					 unsigned long attrs)
 {
	struct xen_grant_dma_data *data;
+	unsigned long offset = offset_in_page(phys);
	unsigned long dma_offset = xen_offset_in_page(offset),
		      pfn_offset = XEN_PFN_DOWN(offset);
	unsigned int i, n_pages = XEN_PFN_UP(dma_offset + size);
	grant_ref_t grant;
	dma_addr_t dma_handle;
 
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
+
	if (WARN_ON(dir == DMA_NONE))
		return DMA_MAPPING_ERROR;
 
@@ -190,7 +194,7 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
	for (i = 0; i < n_pages; i++) {
		gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
-				pfn_to_gfn(page_to_xen_pfn(page) + i + pfn_offset),
+				pfn_to_gfn(page_to_xen_pfn(phys_to_page(phys)) + i + pfn_offset),
				dir == DMA_TO_DEVICE);
	}
 
@@ -199,7 +203,7 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
	return dma_handle;
 }
 
-static void xen_grant_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+static void xen_grant_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
				     size_t size, enum dma_data_direction dir,
				     unsigned long attrs)
 {
@@ -242,7 +246,7 @@ static void xen_grant_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
		return;
 
	for_each_sg(sg, s, nents, i)
-		xen_grant_dma_unmap_page(dev, s->dma_address, sg_dma_len(s), dir,
+		xen_grant_dma_unmap_phys(dev, s->dma_address, sg_dma_len(s), dir,
				attrs);
 }
 
@@ -257,7 +261,7 @@ static int xen_grant_dma_map_sg(struct device *dev, struct scatterlist *sg,
		return -EINVAL;
 
	for_each_sg(sg, s, nents, i) {
-		s->dma_address = xen_grant_dma_map_page(dev, sg_page(s), s->offset,
+		s->dma_address = xen_grant_dma_map_phys(dev, sg_phys(s),
				s->length, dir, attrs);
		if (s->dma_address == DMA_MAPPING_ERROR)
			goto out;
@@ -286,8 +290,8 @@ static const struct dma_map_ops xen_grant_dma_ops = {
	.free_pages = xen_grant_dma_free_pages,
	.mmap = dma_common_mmap,
	.get_sgtable = dma_common_get_sgtable,
-	.map_page = xen_grant_dma_map_page,
-	.unmap_page = xen_grant_dma_unmap_page,
+	.map_phys = xen_grant_dma_map_phys,
+	.unmap_phys = xen_grant_dma_unmap_phys,
	.map_sg = xen_grant_dma_map_sg,
	.unmap_sg = xen_grant_dma_unmap_sg,
	.dma_supported = xen_grant_dma_supported,
-- 
2.51.0
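[Editorial sketch, not part of the patch: xen_grant_dma_map_phys() now derives `offset` from the physical address and then sizes the grant run as `n_pages = XEN_PFN_UP(dma_offset + size)`. The count must be unchanged by the interface conversion. A standalone model, assuming 4 KiB for both the CPU and Xen page sizes (they coincide on x86; on Arm the Xen granule can differ):]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Local shadows of the Xen page helpers, 4 KiB granule assumed. */
#define MODEL_XEN_PAGE_SHIFT	12
#define MODEL_XEN_PAGE_SIZE	(1UL << MODEL_XEN_PAGE_SHIFT)
#define MODEL_XEN_PFN_UP(x)	(((x) + MODEL_XEN_PAGE_SIZE - 1) >> MODEL_XEN_PAGE_SHIFT)
#define model_xen_offset_in_page(x)	((x) & (MODEL_XEN_PAGE_SIZE - 1))

/* Grant references needed for @size bytes starting at physical @phys,
 * mirroring: offset = offset_in_page(phys);
 *            n_pages = XEN_PFN_UP(xen_offset_in_page(offset) + size); */
static unsigned int grant_pages(uint64_t phys, size_t size)
{
	unsigned long offset = (unsigned long)(phys & (MODEL_XEN_PAGE_SIZE - 1));
	unsigned long dma_offset = model_xen_offset_in_page(offset);

	return (unsigned int)MODEL_XEN_PFN_UP(dma_offset + size);
}
```

With equal CPU and Xen page sizes the inner `xen_offset_in_page()` is a no-op on an already-in-page offset; the split matters once the granules diverge.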
From: Leon Romanovsky
Subject: [PATCH v5 14/14] dma-mapping: remove unused map_page callback
Date: Wed, 15 Oct 2025 12:13:00 +0300
Message-ID: <20251015-remove-map-page-v5-14-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

After conversion of arch code to use physical address mapping, there
are no users of .map_page() and .unmap_page() callbacks, so let's
remove them.
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 include/linux/dma-map-ops.h |  7 -------
 kernel/dma/mapping.c        | 12 ------------
 kernel/dma/ops_helpers.c    |  8 +-------
 3 files changed, 1 insertion(+), 26 deletions(-)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 2e98ecc313a3..4809204c674c 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -31,13 +31,6 @@ struct dma_map_ops {
			void *cpu_addr, dma_addr_t dma_addr, size_t size,
			unsigned long attrs);
 
-	dma_addr_t (*map_page)(struct device *dev, struct page *page,
-			unsigned long offset, size_t size,
-			enum dma_data_direction dir, unsigned long attrs);
-	void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
-			size_t size, enum dma_data_direction dir,
-			unsigned long attrs);
-
	dma_addr_t (*map_phys)(struct device *dev, phys_addr_t phys,
			size_t size, enum dma_data_direction dir,
			unsigned long attrs);
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 32a85bfdf873..37163eb49f9f 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -171,16 +171,6 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
	else if (ops->map_phys)
		addr = ops->map_phys(dev, phys, size, dir, attrs);
-	else if (!is_mmio && ops->map_page) {
-		struct page *page = phys_to_page(phys);
-		size_t offset = offset_in_page(phys);
-
-		/*
-		 * The dma_ops API contract for ops->map_page() requires
-		 * kmappable memory.
-		 */
-		addr = ops->map_page(dev, page, offset, size, dir, attrs);
-	}
 
	if (!is_mmio)
		kmsan_handle_dma(phys, size, dir);
@@ -222,8 +212,6 @@ void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
	else if (ops->unmap_phys)
		ops->unmap_phys(dev, addr, size, dir, attrs);
-	else
-		ops->unmap_page(dev, addr, size, dir, attrs);
	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
	debug_dma_unmap_phys(dev, addr, size, dir);
 }
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 1eccbdbc99c1..20caf9cabf69 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -76,11 +76,8 @@ struct page *dma_common_alloc_pages(struct device *dev, size_t size,
	if (use_dma_iommu(dev))
		*dma_handle = iommu_dma_map_phys(dev, phys, size, dir,
						 DMA_ATTR_SKIP_CPU_SYNC);
-	else if (ops->map_phys)
-		*dma_handle = ops->map_phys(dev, phys, size, dir,
-					    DMA_ATTR_SKIP_CPU_SYNC);
	else
-		*dma_handle = ops->map_page(dev, page, 0, size, dir,
+		*dma_handle = ops->map_phys(dev, phys, size, dir,
					    DMA_ATTR_SKIP_CPU_SYNC);
	if (*dma_handle == DMA_MAPPING_ERROR) {
		dma_free_contiguous(dev, page, size);
@@ -102,8 +99,5 @@ void dma_common_free_pages(struct device *dev, size_t size, struct page *page,
	else if (ops->unmap_phys)
		ops->unmap_phys(dev, dma_handle, size, dir,
				DMA_ATTR_SKIP_CPU_SYNC);
-	else if (ops->unmap_page)
-		ops->unmap_page(dev, dma_handle, size, dir,
-				DMA_ATTR_SKIP_CPU_SYNC);
	dma_free_contiguous(dev, page, size);
 }
-- 
2.51.0
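[Editorial sketch, not part of the patch: after this final patch, dma_map_phys() has a single per-bus callback to dispatch to — there is no legacy `ops->map_page` branch to fall back on, so an ops table without `.map_phys` simply fails the mapping. A simplified function-pointer model of that dispatch, with stand-in types; only the shape of the control flow is taken from the patch:]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MODEL_MAPPING_ERROR (~(uint64_t)0)

/* Reduced dma_map_ops: after the series, .map_phys is the only
 * single-buffer mapping callback left. */
struct dma_map_ops_model {
	uint64_t (*map_phys)(uint64_t phys, size_t size);
};

/* Identity mapping, dma-direct style. */
static uint64_t passthrough_map_phys(uint64_t phys, size_t size)
{
	(void)size;
	return phys;
}

/* Dispatch shape of dma_map_phys() post-removal: call .map_phys if the
 * ops table provides it, otherwise there is nothing to fall back to. */
static uint64_t dma_map_phys_model(const struct dma_map_ops_model *ops,
				   uint64_t phys, size_t size)
{
	if (ops && ops->map_phys)
		return ops->map_phys(phys, size);
	return MODEL_MAPPING_ERROR;	/* no ops->map_page branch anymore */
}
```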