From: Leon Romanovsky
To: Marek Szyprowski, Robin Murphy, Russell King, Juergen Gross,
	Stefano Stabellini, Oleksandr Tyshchenko, Richard Henderson,
	Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley",
	Helge Deller, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Geoff Levand,
	"David S. Miller", Andreas Larsson, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, Jason Gunthorpe, Jason Gunthorpe
Subject: [PATCH v5 03/14] ARM: dma-mapping: Reduce struct page exposure in arch_sync_dma*()
Date: Wed, 15 Oct 2025 12:12:49 +0300
Message-ID: <20251015-remove-map-page-v5-3-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Leon Romanovsky

In preparation for changing from the .map_page to the .map_phys DMA
callback, convert the arch_sync_dma*() functions to use physical
addresses instead of struct page.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 arch/arm/mm/dma-mapping.c | 82 ++++++++++++++++++----------------------
 1 file changed, 31 insertions(+), 51 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 08641a936394..b0310d6762d5 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -624,16 +624,14 @@ static void __arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
 	kfree(buf);
 }
 
-static void dma_cache_maint_page(struct page *page, unsigned long offset,
-	size_t size, enum dma_data_direction dir,
+static void dma_cache_maint_page(phys_addr_t phys, size_t size,
+	enum dma_data_direction dir,
 	void (*op)(const void *, size_t, int))
 {
-	unsigned long pfn;
+	unsigned long offset = offset_in_page(phys);
+	unsigned long pfn = __phys_to_pfn(phys);
 	size_t left = size;
 
-	pfn = page_to_pfn(page) + offset / PAGE_SIZE;
-	offset %= PAGE_SIZE;
-
 	/*
 	 * A single sg entry may refer to multiple physically contiguous
 	 * pages. But we still need to process highmem pages individually.
@@ -644,17 +642,18 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
 		size_t len = left;
 		void *vaddr;
 
-		page = pfn_to_page(pfn);
-
-		if (PageHighMem(page)) {
+		phys = __pfn_to_phys(pfn);
+		if (PhysHighMem(phys)) {
 			if (len + offset > PAGE_SIZE)
 				len = PAGE_SIZE - offset;
 
 			if (cache_is_vipt_nonaliasing()) {
-				vaddr = kmap_atomic(page);
+				vaddr = kmap_atomic_pfn(pfn);
 				op(vaddr + offset, len, dir);
 				kunmap_atomic(vaddr);
 			} else {
+				struct page *page = phys_to_page(phys);
+
 				vaddr = kmap_high_get(page);
 				if (vaddr) {
 					op(vaddr + offset, len, dir);
@@ -662,7 +661,8 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
 				}
 			}
 		} else {
-			vaddr = page_address(page) + offset;
+			phys += offset;
+			vaddr = phys_to_virt(phys);
 			op(vaddr, len, dir);
 		}
 		offset = 0;
@@ -676,14 +676,11 @@ static void dma_cache_maint_page(struct page *page, unsigned long offset,
  * Note: Drivers should NOT use this function directly.
  * Use the driver DMA support - see dma-mapping.h (dma_sync_*)
  */
-static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
-	size_t size, enum dma_data_direction dir)
+void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
+	enum dma_data_direction dir)
 {
-	phys_addr_t paddr;
-
-	dma_cache_maint_page(page, off, size, dir, dmac_map_area);
+	dma_cache_maint_page(paddr, size, dir, dmac_map_area);
 
-	paddr = page_to_phys(page) + off;
 	if (dir == DMA_FROM_DEVICE) {
 		outer_inv_range(paddr, paddr + size);
 	} else {
@@ -692,17 +689,15 @@ static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
 	/* FIXME: non-speculating: flush on bidirectional mappings? */
 }
 
-static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
-	size_t size, enum dma_data_direction dir)
+void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
+	enum dma_data_direction dir)
 {
-	phys_addr_t paddr = page_to_phys(page) + off;
-
 	/* FIXME: non-speculating: not required */
 	/* in any case, don't bother invalidating if DMA to device */
 	if (dir != DMA_TO_DEVICE) {
 		outer_inv_range(paddr, paddr + size);
 
-		dma_cache_maint_page(page, off, size, dir, dmac_unmap_area);
+		dma_cache_maint_page(paddr, size, dir, dmac_unmap_area);
 	}
 
 	/*
@@ -1205,7 +1200,7 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
 	unsigned int len = PAGE_ALIGN(s->offset + s->length);
 
 	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
+		arch_sync_dma_for_device(sg_phys(s), s->length, dir);
 
 	prot = __dma_info_to_prot(dir, attrs);
 
@@ -1307,8 +1302,7 @@ static void arm_iommu_unmap_sg(struct device *dev,
 			__iommu_remove_mapping(dev, sg_dma_address(s),
 					       sg_dma_len(s));
 		if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-			__dma_page_dev_to_cpu(sg_page(s), s->offset,
-					      s->length, dir);
+			arch_sync_dma_for_cpu(sg_phys(s), s->length, dir);
 	}
 }
 
@@ -1330,7 +1324,7 @@ static void arm_iommu_sync_sg_for_cpu(struct device *dev,
 		return;
 
 	for_each_sg(sg, s, nents, i)
-		__dma_page_dev_to_cpu(sg_page(s), s->offset, s->length, dir);
+		arch_sync_dma_for_cpu(sg_phys(s), s->length, dir);
 
 }
 
@@ -1352,7 +1346,7 @@ static void arm_iommu_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sg, s, nents, i)
-		__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
+		arch_sync_dma_for_device(sg_phys(s), s->length, dir);
 }
 
 /**
@@ -1374,7 +1368,7 @@ static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
 	int ret, prot, len = PAGE_ALIGN(size + offset);
 
 	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		__dma_page_cpu_to_dev(page, offset, size, dir);
+		arch_sync_dma_for_device(page_to_phys(page) + offset, size, dir);
 
 	dma_addr = __alloc_iova(mapping, len);
 	if (dma_addr == DMA_MAPPING_ERROR)
@@ -1407,7 +1401,6 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page;
 	int offset = handle & ~PAGE_MASK;
 	int len = PAGE_ALIGN(size + offset);
 
@@ -1415,8 +1408,9 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 		return;
 
 	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
-		page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
-		__dma_page_dev_to_cpu(page, offset, size, dir);
+		phys_addr_t phys = iommu_iova_to_phys(mapping->domain, iova);
+
+		arch_sync_dma_for_cpu(phys + offset, size, dir);
 	}
 
 	iommu_unmap(mapping->domain, iova, len);
@@ -1485,14 +1479,14 @@ static void arm_iommu_sync_single_for_cpu(struct device *dev,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page;
 	unsigned int offset = handle & ~PAGE_MASK;
+	phys_addr_t phys;
 
 	if (dev->dma_coherent || !iova)
 		return;
 
-	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
-	__dma_page_dev_to_cpu(page, offset, size, dir);
+	phys = iommu_iova_to_phys(mapping->domain, iova);
+	arch_sync_dma_for_cpu(phys + offset, size, dir);
 }
 
 static void arm_iommu_sync_single_for_device(struct device *dev,
@@ -1500,14 +1494,14 @@ static void arm_iommu_sync_single_for_device(struct device *dev,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page;
 	unsigned int offset = handle & ~PAGE_MASK;
+	phys_addr_t phys;
 
 	if (dev->dma_coherent || !iova)
 		return;
 
-	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
-	__dma_page_cpu_to_dev(page, offset, size, dir);
+	phys = iommu_iova_to_phys(mapping->domain, iova);
+	arch_sync_dma_for_device(phys + offset, size, dir);
 }
 
 static const struct dma_map_ops iommu_ops = {
@@ -1794,20 +1788,6 @@ void arch_teardown_dma_ops(struct device *dev)
 	set_dma_ops(dev, NULL);
 }
 
-void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
-{
-	__dma_page_cpu_to_dev(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
-			size, dir);
-}
-
-void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
-{
-	__dma_page_dev_to_cpu(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
-			size, dir);
-}
-
 void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
 		gfp_t gfp, unsigned long attrs)
 {
-- 
2.51.0
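
[Editorial note, not part of the patch] The heart of the conversion is that
dma_cache_maint_page() now derives the page frame number and the in-page
offset from its single phys_addr_t argument, where callers previously had to
pass a struct page plus an explicit offset. Below is a minimal userspace
sketch of that split, assuming 4 KiB pages; PAGE_SHIFT, phys_addr_t and the
main() harness are illustrative stand-ins rather than kernel code, and the
comments name the kernel helpers the patch actually uses.

#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT 12				/* assumed 4 KiB pages */
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

typedef uint64_t phys_addr_t;			/* stand-in for the kernel type */

int main(void)
{
	phys_addr_t phys = 0x80012345;		/* example physical address */

	/* What the old code received explicitly as (page, offset) ... */
	unsigned long pfn    = phys >> PAGE_SHIFT;	/* __phys_to_pfn(phys) */
	unsigned long offset = phys & (PAGE_SIZE - 1);	/* offset_in_page(phys) */

	/* ... is now recovered from the single phys argument. */
	printf("pfn=0x%lx offset=0x%lx\n", pfn, offset);
	return 0;
}

Carrying the physical address end to end also removes the phys-to-page and
page-to-phys round-trips that the deleted arch_sync_dma_for_*() wrappers and
the IOMMU sync paths previously performed via phys_to_page()/page_to_phys().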