From nobody Wed Oct 1 22:32:41 2025
From: Leon Romanovsky
To: Marek Szyprowski
Cc: Leon Romanovsky , Jason Gunthorpe , Andreas Larsson , Borislav Petkov , Dave Hansen , "David S. Miller" , Geoff Levand , Helge Deller , Ingo Molnar , iommu@lists.linux.dev, "James E.J. Bottomley" , Jason Wang , Juergen Gross , linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, Madhavan Srinivasan , Matt Turner , Michael Ellerman , "Michael S. Tsirkin" , Richard Henderson , sparclinux@vger.kernel.org, Stefano Stabellini , Thomas Bogendoerfer , Thomas Gleixner , virtualization@lists.linux.dev, x86@kernel.org, xen-devel@lists.xenproject.org, Magnus Lindholm
Subject: [PATCH v1 1/9] alpha: Convert mapping routine to rely on physical address
Date: Sun, 28 Sep 2025 18:02:21 +0300
Message-ID: <512d4c498103fcfccd8c60ce1982cd961434d30b.1759071169.git.leon@kernel.org>

From: Leon Romanovsky

Alpha doesn't need struct page and can perform mapping based on physical addresses, so convert it to implement the new .map_phys callback. As part of this change, remove the useless BUG_ON(), as the DMA mapping layer guarantees that a valid DMA direction is provided.
Signed-off-by: Leon Romanovsky
Tested-by: Magnus Lindholm
---
 arch/alpha/kernel/pci_iommu.c | 48 +++++++++++++++--------------------
 1 file changed, 21 insertions(+), 27 deletions(-)

diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index dc91de50f906..3e4f631a1f27 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -224,28 +224,26 @@ static int pci_dac_dma_supported(struct pci_dev *dev, u64 mask)
    until either pci_unmap_single or pci_dma_sync_single is performed.  */

 static dma_addr_t
-pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
+pci_map_single_1(struct pci_dev *pdev, phys_addr_t paddr, size_t size,
                 int dac_allowed)
 {
        struct pci_controller *hose = pdev ? pdev->sysdata : pci_isa_hose;
        dma_addr_t max_dma = pdev ? pdev->dma_mask : ISA_DMA_MASK;
+       unsigned long offset = offset_in_page(paddr);
        struct pci_iommu_arena *arena;
        long npages, dma_ofs, i;
-       unsigned long paddr;
        dma_addr_t ret;
        unsigned int align = 0;
        struct device *dev = pdev ? &pdev->dev : NULL;

-       paddr = __pa(cpu_addr);
-
 #if !DEBUG_NODIRECT
        /* First check to see if we can use the direct map window.  */
        if (paddr + size + __direct_map_base - 1 <= max_dma
            && paddr + size <= __direct_map_size) {
                ret = paddr + __direct_map_base;

-               DBGA2("pci_map_single: [%p,%zx] -> direct %llx from %ps\n",
-                     cpu_addr, size, ret, __builtin_return_address(0));
+               DBGA2("pci_map_single: [%pa,%zx] -> direct %llx from %ps\n",
+                     &paddr, size, ret, __builtin_return_address(0));

                return ret;
        }
@@ -255,8 +253,8 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
        if (dac_allowed) {
                ret = paddr + alpha_mv.pci_dac_offset;

-               DBGA2("pci_map_single: [%p,%zx] -> DAC %llx from %ps\n",
-                     cpu_addr, size, ret, __builtin_return_address(0));
+               DBGA2("pci_map_single: [%pa,%zx] -> DAC %llx from %ps\n",
+                     &paddr, size, ret, __builtin_return_address(0));

                return ret;
        }
@@ -290,10 +288,10 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
        arena->ptes[i + dma_ofs] = mk_iommu_pte(paddr);

        ret = arena->dma_base + dma_ofs * PAGE_SIZE;
-       ret += (unsigned long)cpu_addr & ~PAGE_MASK;
+       ret += offset;

-       DBGA2("pci_map_single: [%p,%zx] np %ld -> sg %llx from %ps\n",
-             cpu_addr, size, npages, ret, __builtin_return_address(0));
+       DBGA2("pci_map_single: [%pa,%zx] np %ld -> sg %llx from %ps\n",
+             &paddr, size, npages, ret, __builtin_return_address(0));

        return ret;
 }
@@ -322,19 +320,18 @@ static struct pci_dev *alpha_gendev_to_pci(struct device *dev)
        return NULL;
 }

-static dma_addr_t alpha_pci_map_page(struct device *dev, struct page *page,
-                                    unsigned long offset, size_t size,
-                                    enum dma_data_direction dir,
+static dma_addr_t alpha_pci_map_phys(struct device *dev, phys_addr_t phys,
+                                    size_t size, enum dma_data_direction dir,
                                     unsigned long attrs)
 {
        struct pci_dev *pdev = alpha_gendev_to_pci(dev);
        int dac_allowed;

-       BUG_ON(dir == DMA_NONE);
+       if (attrs & DMA_ATTR_MMIO)
+               return DMA_MAPPING_ERROR;

-       dac_allowed = pdev ? pci_dac_dma_supported(pdev, pdev->dma_mask) : 0;
-       return pci_map_single_1(pdev, (char *)page_address(page) + offset,
-                               size, dac_allowed);
+       dac_allowed = pdev ? pci_dac_dma_supported(pdev, pdev->dma_mask) : 0;
+       return pci_map_single_1(pdev, phys, size, dac_allowed);
 }

 /* Unmap a single streaming mode DMA translation.  The DMA_ADDR and
@@ -343,7 +340,7 @@ static dma_addr_t alpha_pci_map_page(struct device *dev, struct page *page,
    the cpu to the buffer are guaranteed to see whatever the device
    wrote there.  */

-static void alpha_pci_unmap_page(struct device *dev, dma_addr_t dma_addr,
+static void alpha_pci_unmap_phys(struct device *dev, dma_addr_t dma_addr,
                         size_t size, enum dma_data_direction dir,
                         unsigned long attrs)
 {
@@ -353,8 +350,6 @@ static void alpha_pci_unmap_page(struct device *dev, dma_addr_t dma_addr,
        struct pci_iommu_arena *arena;
        long dma_ofs, npages;

-       BUG_ON(dir == DMA_NONE);
-
        if (dma_addr >= __direct_map_base
            && dma_addr < __direct_map_base + __direct_map_size) {
                /* Nothing to do.  */
@@ -429,7 +424,7 @@ static void *alpha_pci_alloc_coherent(struct device *dev, size_t size,
        }
        memset(cpu_addr, 0, size);

-       *dma_addrp = pci_map_single_1(pdev, cpu_addr, size, 0);
+       *dma_addrp = pci_map_single_1(pdev, virt_to_phys(cpu_addr), size, 0);
        if (*dma_addrp == DMA_MAPPING_ERROR) {
                free_pages((unsigned long)cpu_addr, order);
                if (alpha_mv.mv_pci_tbi || (gfp & GFP_DMA))
@@ -643,9 +638,8 @@ static int alpha_pci_map_sg(struct device *dev, struct scatterlist *sg,
        /* Fast path single entry scatterlists.  */
        if (nents == 1) {
                sg->dma_length = sg->length;
-               sg->dma_address
-                 = pci_map_single_1(pdev, SG_ENT_VIRT_ADDRESS(sg),
-                                    sg->length, dac_allowed);
+               sg->dma_address = pci_map_single_1(pdev, sg_phys(sg),
+                                                  sg->length, dac_allowed);
                if (sg->dma_address == DMA_MAPPING_ERROR)
                        return -EIO;
                return 1;
@@ -917,8 +911,8 @@ iommu_unbind(struct pci_iommu_arena *arena, long pg_start, long pg_count)
 const struct dma_map_ops alpha_pci_ops = {
        .alloc                  = alpha_pci_alloc_coherent,
        .free                   = alpha_pci_free_coherent,
-       .map_page               = alpha_pci_map_page,
-       .unmap_page             = alpha_pci_unmap_page,
+       .map_phys               = alpha_pci_map_phys,
+       .unmap_phys             = alpha_pci_unmap_phys,
        .map_sg                 = alpha_pci_map_sg,
        .unmap_sg               = alpha_pci_unmap_sg,
        .dma_supported          = alpha_pci_supported,
-- 
2.51.0

From nobody Wed Oct 1 22:32:41 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v1 2/9] MIPS/jazzdma: Provide physical address directly
Date: Sun, 28 Sep 2025 18:02:22 +0300

From: Leon Romanovsky

MIPS jazz uses physical addresses for mapping pages, so convert it to get them directly from the DMA mapping routine.

Signed-off-by: Leon Romanovsky
---
 arch/mips/jazz/jazzdma.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/arch/mips/jazz/jazzdma.c b/arch/mips/jazz/jazzdma.c
index c97b089b9902..45fe71aa454b 100644
--- a/arch/mips/jazz/jazzdma.c
+++ b/arch/mips/jazz/jazzdma.c
@@ -521,18 +521,24 @@ static void jazz_dma_free(struct device *dev, size_t size, void *vaddr,
        __free_pages(virt_to_page(vaddr), get_order(size));
 }

-static dma_addr_t jazz_dma_map_page(struct device *dev, struct page *page,
-               unsigned long offset, size_t size, enum dma_data_direction dir,
-               unsigned long attrs)
+static dma_addr_t jazz_dma_map_phys(struct device *dev, phys_addr_t phys,
+               size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-       phys_addr_t phys = page_to_phys(page) + offset;
+       if (attrs & DMA_ATTR_MMIO)
+               /*
+                * This check is included because older versions of the code
+                * lacked MMIO path support, and my ability to test this path
+                * is limited. From a technical standpoint, however, there is
+                * no restriction, as the following code operates solely on
+                * physical addresses.
+                */
+               return DMA_MAPPING_ERROR;

        if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
                arch_sync_dma_for_device(phys, size, dir);
        return vdma_alloc(phys, size);
 }

-static void jazz_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
+static void jazz_dma_unmap_phys(struct device *dev, dma_addr_t dma_addr,
                size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
        if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
@@ -607,8 +613,8 @@ static void jazz_dma_sync_sg_for_cpu(struct device *dev,
 const struct dma_map_ops jazz_dma_ops = {
        .alloc                  = jazz_dma_alloc,
        .free                   = jazz_dma_free,
-       .map_page               = jazz_dma_map_page,
-       .unmap_page             = jazz_dma_unmap_page,
+       .map_phys               = jazz_dma_map_phys,
+       .unmap_phys             = jazz_dma_unmap_phys,
        .map_sg                 = jazz_dma_map_sg,
        .unmap_sg               = jazz_dma_unmap_sg,
        .sync_single_for_cpu    = jazz_dma_sync_single_for_cpu,
-- 
2.51.0

From nobody Wed Oct 1 22:32:41 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v1 3/9] parisc: Convert DMA map_page to map_phys interface
Date: Sun, 28 Sep 2025 18:02:23 +0300
Message-ID: <333ec4dabec16d3d913a93780bc6e7ddb5240fcf.1759071169.git.leon@kernel.org>

From: Leon Romanovsky

Perform a mechanical conversion from the .map_page callback to .map_phys.

Signed-off-by: Leon Romanovsky
---
 drivers/parisc/ccio-dma.c  | 25 +++++++++++++------------
 drivers/parisc/sba_iommu.c | 23 ++++++++++++-----------
 2 files changed, 25 insertions(+), 23 deletions(-)

diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
index feef537257d0..d45f3634f827 100644
--- a/drivers/parisc/ccio-dma.c
+++ b/drivers/parisc/ccio-dma.c
@@ -773,17 +773,18 @@ ccio_map_single(struct device *dev, void *addr, size_t size,


 static dma_addr_t
-ccio_map_page(struct device *dev, struct page *page, unsigned long offset,
-             size_t size, enum dma_data_direction direction,
-             unsigned long attrs)
+ccio_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+             enum dma_data_direction direction, unsigned long attrs)
 {
-       return ccio_map_single(dev, page_address(page) + offset, size,
-                              direction);
+       if (attrs & DMA_ATTR_MMIO)
+               return DMA_MAPPING_ERROR;
+
+       return ccio_map_single(dev, phys_to_virt(phys), size, direction);
 }


 /**
- * ccio_unmap_page - Unmap an address range from the IOMMU.
+ * ccio_unmap_phys - Unmap an address range from the IOMMU.
  * @dev: The PCI device.
  * @iova: The start address of the DMA region.
  * @size: The length of the DMA region.
@@ -791,7 +792,7 @@ ccio_map_page(struct device *dev, struct page *page, unsigned long offset,
  * @attrs: attributes
  */
 static void
-ccio_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
+ccio_unmap_phys(struct device *dev, dma_addr_t iova, size_t size,
                enum dma_data_direction direction, unsigned long attrs)
 {
        struct ioc *ioc;
@@ -873,7 +874,7 @@ static void
 ccio_free(struct device *dev, size_t size, void *cpu_addr,
                dma_addr_t dma_handle, unsigned long attrs)
 {
-       ccio_unmap_page(dev, dma_handle, size, 0, 0);
+       ccio_unmap_phys(dev, dma_handle, size, 0, 0);
        free_pages((unsigned long)cpu_addr, get_order(size));
 }

@@ -1004,7 +1005,7 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 #ifdef CCIO_COLLECT_STATS
                ioc->usg_pages += sg_dma_len(sglist) >> PAGE_SHIFT;
 #endif
-               ccio_unmap_page(dev, sg_dma_address(sglist),
+               ccio_unmap_phys(dev, sg_dma_address(sglist),
                                sg_dma_len(sglist), direction, 0);
                ++sglist;
                nents--;
@@ -1017,8 +1018,8 @@ static const struct dma_map_ops ccio_ops = {
        .dma_supported =        ccio_dma_supported,
        .alloc =                ccio_alloc,
        .free =                 ccio_free,
-       .map_page =             ccio_map_page,
-       .unmap_page =           ccio_unmap_page,
+       .map_phys =             ccio_map_phys,
+       .unmap_phys =           ccio_unmap_phys,
        .map_sg =               ccio_map_sg,
        .unmap_sg =             ccio_unmap_sg,
        .get_sgtable =          dma_common_get_sgtable,
@@ -1072,7 +1073,7 @@ static int ccio_proc_info(struct seq_file *m, void *p)
                ioc->msingle_calls, ioc->msingle_pages,
                (int)((ioc->msingle_pages * 1000)/ioc->msingle_calls));

-       /* KLUGE - unmap_sg calls unmap_page for each mapped page */
+       /* KLUGE - unmap_sg calls unmap_phys for each mapped page */
        min = ioc->usingle_calls - ioc->usg_calls;
        max = ioc->usingle_pages - ioc->usg_pages;
        seq_printf(m, "pci_unmap_single: %8ld calls  %8ld pages (avg %d/1000)\n",
diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
index fc3863c09f83..8040aa4e6ff4 100644
--- a/drivers/parisc/sba_iommu.c
+++ b/drivers/parisc/sba_iommu.c
@@ -778,17 +778,18 @@ sba_map_single(struct device *dev, void *addr, size_t size,


 static dma_addr_t
-sba_map_page(struct device *dev, struct page *page, unsigned long offset,
-            size_t size, enum dma_data_direction direction,
-            unsigned long attrs)
+sba_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+            enum dma_data_direction direction, unsigned long attrs)
 {
-       return sba_map_single(dev, page_address(page) + offset, size,
-                             direction);
+       if (attrs & DMA_ATTR_MMIO)
+               return DMA_MAPPING_ERROR;
+
+       return sba_map_single(dev, phys_to_virt(phys), size, direction);
 }


 /**
- * sba_unmap_page - unmap one IOVA and free resources
+ * sba_unmap_phys - unmap one IOVA and free resources
  * @dev: instance of PCI owned by the driver that's asking.
  * @iova: IOVA of driver buffer previously mapped.
  * @size: number of bytes mapped in driver buffer.
@@ -798,7 +799,7 @@ sba_map_page(struct device *dev, struct page *page, unsigned long offset,
  * See Documentation/core-api/dma-api-howto.rst
  */
 static void
-sba_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
+sba_unmap_phys(struct device *dev, dma_addr_t iova, size_t size,
               enum dma_data_direction direction, unsigned long attrs)
 {
        struct ioc *ioc;
@@ -914,7 +915,7 @@ static void
 sba_free(struct device *hwdev, size_t size, void *vaddr,
                    dma_addr_t dma_handle, unsigned long attrs)
 {
-       sba_unmap_page(hwdev, dma_handle, size, 0, 0);
+       sba_unmap_phys(hwdev, dma_handle, size, 0, 0);
        free_pages((unsigned long) vaddr, get_order(size));
 }

@@ -1061,7 +1062,7 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,

        while (nents && sg_dma_len(sglist)) {

-               sba_unmap_page(dev, sg_dma_address(sglist), sg_dma_len(sglist),
+               sba_unmap_phys(dev, sg_dma_address(sglist), sg_dma_len(sglist),
                               direction, 0);
 #ifdef SBA_COLLECT_STATS
                ioc->usg_pages += ((sg_dma_address(sglist) & ~IOVP_MASK) + sg_dma_len(sglist) + IOVP_SIZE - 1) >> PAGE_SHIFT;
@@ -1085,8 +1086,8 @@ static const struct dma_map_ops sba_ops = {
        .dma_supported =        sba_dma_supported,
        .alloc =                sba_alloc,
        .free =                 sba_free,
-       .map_page =             sba_map_page,
-       .unmap_page =           sba_unmap_page,
+       .map_phys =             sba_map_phys,
+       .unmap_phys =           sba_unmap_phys,
        .map_sg =               sba_map_sg,
        .unmap_sg =             sba_unmap_sg,
        .get_sgtable =          dma_common_get_sgtable,
-- 
2.51.0

From nobody Wed Oct 1 22:32:41 2025
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH v1 4/9] powerpc: Convert to physical address DMA mapping
Date: Sun, 28 Sep 2025 18:02:24 +0300

From: Leon Romanovsky

Adapt the PowerPC DMA code to use physical addresses in order to prepare for the removal of the .map_page and .unmap_page callbacks.
Signed-off-by: Leon Romanovsky --- arch/powerpc/include/asm/iommu.h | 8 +++--- arch/powerpc/kernel/dma-iommu.c | 22 +++++++--------- arch/powerpc/kernel/iommu.c | 14 +++++----- arch/powerpc/platforms/ps3/system-bus.c | 33 ++++++++++++++---------- arch/powerpc/platforms/pseries/ibmebus.c | 15 ++++++----- arch/powerpc/platforms/pseries/vio.c | 21 ++++++++------- 6 files changed, 60 insertions(+), 53 deletions(-) diff --git a/arch/powerpc/include/asm/iommu.h b/arch/powerpc/include/asm/io= mmu.h index b410021ad4c6..eafdd63cd6c4 100644 --- a/arch/powerpc/include/asm/iommu.h +++ b/arch/powerpc/include/asm/iommu.h @@ -274,12 +274,12 @@ extern void *iommu_alloc_coherent(struct device *dev,= struct iommu_table *tbl, unsigned long mask, gfp_t flag, int node); extern void iommu_free_coherent(struct iommu_table *tbl, size_t size, void *vaddr, dma_addr_t dma_handle); -extern dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *t= bl, - struct page *page, unsigned long offset, - size_t size, unsigned long mask, +extern dma_addr_t iommu_map_phys(struct device *dev, struct iommu_table *t= bl, + phys_addr_t phys, size_t size, + unsigned long mask, enum dma_data_direction direction, unsigned long attrs); -extern void iommu_unmap_page(struct iommu_table *tbl, dma_addr_t dma_handl= e, +extern void iommu_unmap_phys(struct iommu_table *tbl, dma_addr_t dma_handl= e, size_t size, enum dma_data_direction direction, unsigned long attrs); =20 diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iomm= u.c index 0359ab72cd3b..aa3689d61917 100644 --- a/arch/powerpc/kernel/dma-iommu.c +++ b/arch/powerpc/kernel/dma-iommu.c @@ -93,28 +93,26 @@ static void dma_iommu_free_coherent(struct device *dev,= size_t size, =20 /* Creates TCEs for a user provided buffer. The user buffer must be * contiguous real kernel storage (not vmalloc). The address passed here - * comprises a page address and offset into that page. 
The dma_addr_t - * returned will point to the same byte within the page as was passed in. + * is a physical address to that page. The dma_addr_t returned will point + * to the same byte within the page as was passed in. */ -static dma_addr_t dma_iommu_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, +static dma_addr_t dma_iommu_map_phys(struct device *dev, phys_addr_t phys, + size_t size, enum dma_data_direction direction, unsigned long attrs) { - return iommu_map_page(dev, get_iommu_table_base(dev), page, offset, - size, dma_get_mask(dev), direction, attrs); + return iommu_map_phys(dev, get_iommu_table_base(dev), phys, size, + dma_get_mask(dev), direction, attrs); } =20 - -static void dma_iommu_unmap_page(struct device *dev, dma_addr_t dma_handle, +static void dma_iommu_unmap_phys(struct device *dev, dma_addr_t dma_handle, size_t size, enum dma_data_direction direction, unsigned long attrs) { - iommu_unmap_page(get_iommu_table_base(dev), dma_handle, size, direction, + iommu_unmap_phys(get_iommu_table_base(dev), dma_handle, size, direction, attrs); } =20 - static int dma_iommu_map_sg(struct device *dev, struct scatterlist *sglist, int nelems, enum dma_data_direction direction, unsigned long attrs) @@ -211,8 +209,8 @@ const struct dma_map_ops dma_iommu_ops =3D { .map_sg =3D dma_iommu_map_sg, .unmap_sg =3D dma_iommu_unmap_sg, .dma_supported =3D dma_iommu_dma_supported, - .map_page =3D dma_iommu_map_page, - .unmap_page =3D dma_iommu_unmap_page, + .map_phys =3D dma_iommu_map_phys, + .unmap_phys =3D dma_iommu_unmap_phys, .get_required_mask =3D dma_iommu_get_required_mask, .mmap =3D dma_common_mmap, .get_sgtable =3D dma_common_get_sgtable, diff --git a/arch/powerpc/kernel/iommu.c b/arch/powerpc/kernel/iommu.c index 244eb4857e7f..6b5f4b72ce97 100644 --- a/arch/powerpc/kernel/iommu.c +++ b/arch/powerpc/kernel/iommu.c @@ -848,12 +848,12 @@ EXPORT_SYMBOL_GPL(iommu_tce_table_put); =20 /* Creates TCEs for a user provided buffer. 
The user buffer must be * contiguous real kernel storage (not vmalloc). The address passed here - * comprises a page address and offset into that page. The dma_addr_t - * returned will point to the same byte within the page as was passed in. + * is physical address into that page. The dma_addr_t returned will point + * to the same byte within the page as was passed in. */ -dma_addr_t iommu_map_page(struct device *dev, struct iommu_table *tbl, - struct page *page, unsigned long offset, size_t size, - unsigned long mask, enum dma_data_direction direction, +dma_addr_t iommu_map_phys(struct device *dev, struct iommu_table *tbl, + phys_addr_t phys, size_t size, unsigned long mask, + enum dma_data_direction direction, unsigned long attrs) { dma_addr_t dma_handle =3D DMA_MAPPING_ERROR; @@ -863,7 +863,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct io= mmu_table *tbl, =20 BUG_ON(direction =3D=3D DMA_NONE); =20 - vaddr =3D page_address(page) + offset; + vaddr =3D phys_to_virt(phys); uaddr =3D (unsigned long)vaddr; =20 if (tbl) { @@ -890,7 +890,7 @@ dma_addr_t iommu_map_page(struct device *dev, struct io= mmu_table *tbl, return dma_handle; } =20 -void iommu_unmap_page(struct iommu_table *tbl, dma_addr_t dma_handle, +void iommu_unmap_phys(struct iommu_table *tbl, dma_addr_t dma_handle, size_t size, enum dma_data_direction direction, unsigned long attrs) { diff --git a/arch/powerpc/platforms/ps3/system-bus.c b/arch/powerpc/platfor= ms/ps3/system-bus.c index afbaabf182d0..a223ba777148 100644 --- a/arch/powerpc/platforms/ps3/system-bus.c +++ b/arch/powerpc/platforms/ps3/system-bus.c @@ -551,18 +551,20 @@ static void ps3_free_coherent(struct device *_dev, si= ze_t size, void *vaddr, =20 /* Creates TCEs for a user provided buffer. The user buffer must be * contiguous real kernel storage (not vmalloc). The address passed here - * comprises a page address and offset into that page. The dma_addr_t - * returned will point to the same byte within the page as was passed in. 
+ * is a physical address within that page. The dma_addr_t returned will point + * to the same byte within the page as was passed in. */ =20 -static dma_addr_t ps3_sb_map_page(struct device *_dev, struct page *page, - unsigned long offset, size_t size, enum dma_data_direction direction, - unsigned long attrs) +static dma_addr_t ps3_sb_map_phys(struct device *_dev, phys_addr_t phys, + size_t size, enum dma_data_direction direction, unsigned long attrs) { struct ps3_system_bus_device *dev =3D ps3_dev_to_system_bus_dev(_dev); int result; dma_addr_t bus_addr; - void *ptr =3D page_address(page) + offset; + void *ptr =3D phys_to_virt(phys); + + if (attrs & DMA_ATTR_MMIO) + return DMA_MAPPING_ERROR; =20 result =3D ps3_dma_map(dev->d_region, (unsigned long)ptr, size, &bus_addr, @@ -577,8 +579,8 @@ static dma_addr_t ps3_sb_map_page(struct device *_dev, = struct page *page, return bus_addr; } =20 -static dma_addr_t ps3_ioc0_map_page(struct device *_dev, struct page *page, - unsigned long offset, size_t size, +static dma_addr_t ps3_ioc0_map_phys(struct device *_dev, phys_addr_t phys, + size_t size, enum dma_data_direction direction, unsigned long attrs) { @@ -586,7 +588,10 @@ static dma_addr_t ps3_ioc0_map_page(struct device *_de= v, struct page *page, int result; dma_addr_t bus_addr; u64 iopte_flag; - void *ptr =3D page_address(page) + offset; + void *ptr =3D phys_to_virt(phys); + + if (attrs & DMA_ATTR_MMIO) + return DMA_MAPPING_ERROR; =20 iopte_flag =3D CBE_IOPTE_M; switch (direction) { @@ -613,7 +618,7 @@ static dma_addr_t ps3_ioc0_map_page(struct device *_dev= , struct page *page, return bus_addr; } =20 -static void ps3_unmap_page(struct device *_dev, dma_addr_t dma_addr, +static void ps3_unmap_phys(struct device *_dev, dma_addr_t dma_addr, size_t size, enum dma_data_direction direction, unsigned long attrs) { struct ps3_system_bus_device *dev =3D ps3_dev_to_system_bus_dev(_dev); @@ -690,8 +695,8 @@ static const struct dma_map_ops ps3_sb_dma_ops =3D { .map_sg =3D
ps3_sb_map_sg, .unmap_sg =3D ps3_sb_unmap_sg, .dma_supported =3D ps3_dma_supported, - .map_page =3D ps3_sb_map_page, - .unmap_page =3D ps3_unmap_page, + .map_phys =3D ps3_sb_map_phys, + .unmap_phys =3D ps3_unmap_phys, .mmap =3D dma_common_mmap, .get_sgtable =3D dma_common_get_sgtable, .alloc_pages_op =3D dma_common_alloc_pages, @@ -704,8 +709,8 @@ static const struct dma_map_ops ps3_ioc0_dma_ops =3D { .map_sg =3D ps3_ioc0_map_sg, .unmap_sg =3D ps3_ioc0_unmap_sg, .dma_supported =3D ps3_dma_supported, - .map_page =3D ps3_ioc0_map_page, - .unmap_page =3D ps3_unmap_page, + .map_phys =3D ps3_ioc0_map_phys, + .unmap_phys =3D ps3_unmap_phys, .mmap =3D dma_common_mmap, .get_sgtable =3D dma_common_get_sgtable, .alloc_pages_op =3D dma_common_alloc_pages, diff --git a/arch/powerpc/platforms/pseries/ibmebus.c b/arch/powerpc/platfo= rms/pseries/ibmebus.c index 3436b0af795e..cad2deb7e70d 100644 --- a/arch/powerpc/platforms/pseries/ibmebus.c +++ b/arch/powerpc/platforms/pseries/ibmebus.c @@ -86,17 +86,18 @@ static void ibmebus_free_coherent(struct device *dev, kfree(vaddr); } =20 -static dma_addr_t ibmebus_map_page(struct device *dev, - struct page *page, - unsigned long offset, +static dma_addr_t ibmebus_map_phys(struct device *dev, phys_addr_t phys, size_t size, enum dma_data_direction direction, unsigned long attrs) { - return (dma_addr_t)(page_address(page) + offset); + if (attrs & DMA_ATTR_MMIO) + return DMA_MAPPING_ERROR; + + return (dma_addr_t)(phys_to_virt(phys)); } =20 -static void ibmebus_unmap_page(struct device *dev, +static void ibmebus_unmap_phys(struct device *dev, dma_addr_t dma_addr, size_t size, enum dma_data_direction direction, @@ -146,8 +147,8 @@ static const struct dma_map_ops ibmebus_dma_ops =3D { .unmap_sg =3D ibmebus_unmap_sg, .dma_supported =3D ibmebus_dma_supported, .get_required_mask =3D ibmebus_dma_get_required_mask, - .map_page =3D ibmebus_map_page, - .unmap_page =3D ibmebus_unmap_page, + .map_phys =3D ibmebus_map_phys, + .unmap_phys =3D 
ibmebus_unmap_phys, }; =20 static int ibmebus_match_path(struct device *dev, const void *data) diff --git a/arch/powerpc/platforms/pseries/vio.c b/arch/powerpc/platforms/= pseries/vio.c index ac1d2d2c9a88..838e29d47378 100644 --- a/arch/powerpc/platforms/pseries/vio.c +++ b/arch/powerpc/platforms/pseries/vio.c @@ -512,18 +512,21 @@ static void vio_dma_iommu_free_coherent(struct device= *dev, size_t size, vio_cmo_dealloc(viodev, roundup(size, PAGE_SIZE)); } =20 -static dma_addr_t vio_dma_iommu_map_page(struct device *dev, struct page *= page, - unsigned long offset, size_t size, - enum dma_data_direction direction, - unsigned long attrs) +static dma_addr_t vio_dma_iommu_map_phys(struct device *dev, phys_addr_t p= hys, + size_t size, + enum dma_data_direction direction, + unsigned long attrs) { struct vio_dev *viodev =3D to_vio_dev(dev); struct iommu_table *tbl =3D get_iommu_table_base(dev); dma_addr_t ret =3D DMA_MAPPING_ERROR; =20 + if (attrs & DMA_ATTR_MMIO) + return ret; + if (vio_cmo_alloc(viodev, roundup(size, IOMMU_PAGE_SIZE(tbl)))) goto out_fail; - ret =3D iommu_map_page(dev, tbl, page, offset, size, dma_get_mask(dev), + ret =3D iommu_map_phys(dev, tbl, phys, size, dma_get_mask(dev), direction, attrs); if (unlikely(ret =3D=3D DMA_MAPPING_ERROR)) goto out_deallocate; @@ -536,7 +539,7 @@ static dma_addr_t vio_dma_iommu_map_page(struct device = *dev, struct page *page, return DMA_MAPPING_ERROR; } =20 -static void vio_dma_iommu_unmap_page(struct device *dev, dma_addr_t dma_ha= ndle, +static void vio_dma_iommu_unmap_phys(struct device *dev, dma_addr_t dma_ha= ndle, size_t size, enum dma_data_direction direction, unsigned long attrs) @@ -544,7 +547,7 @@ static void vio_dma_iommu_unmap_page(struct device *dev= , dma_addr_t dma_handle, struct vio_dev *viodev =3D to_vio_dev(dev); struct iommu_table *tbl =3D get_iommu_table_base(dev); =20 - iommu_unmap_page(tbl, dma_handle, size, direction, attrs); + iommu_unmap_phys(tbl, dma_handle, size, direction, attrs); 
vio_cmo_dealloc(viodev, roundup(size, IOMMU_PAGE_SIZE(tbl))); } =20 @@ -605,8 +608,8 @@ static const struct dma_map_ops vio_dma_mapping_ops =3D= { .free =3D vio_dma_iommu_free_coherent, .map_sg =3D vio_dma_iommu_map_sg, .unmap_sg =3D vio_dma_iommu_unmap_sg, - .map_page =3D vio_dma_iommu_map_page, - .unmap_page =3D vio_dma_iommu_unmap_page, + .map_phys =3D vio_dma_iommu_map_phys, + .unmap_phys =3D vio_dma_iommu_unmap_phys, .dma_supported =3D dma_iommu_dma_supported, .get_required_mask =3D dma_iommu_get_required_mask, .mmap =3D dma_common_mmap, --=20 2.51.0 From nobody Wed Oct 1 22:32:41 2025 From: Leon Romanovsky Subject: [PATCH v1 5/9] sparc64: Use physical address DMA mapping Date: Sun, 28 Sep 2025 18:02:25 +0300 Convert sparc architecture DMA code to use .map_phys callback.
Signed-off-by: Leon Romanovsky --- arch/sparc/kernel/iommu.c | 16 ++++++------ arch/sparc/kernel/pci_sun4v.c | 16 ++++++------ arch/sparc/mm/io-unit.c | 13 +++++----- arch/sparc/mm/iommu.c | 46 ++++++++++++++++++----------------- 4 files changed, 48 insertions(+), 43 deletions(-) diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c index da0363692528..288301d2398a 100644 --- a/arch/sparc/kernel/iommu.c +++ b/arch/sparc/kernel/iommu.c @@ -260,9 +260,8 @@ static void dma_4u_free_coherent(struct device *dev, si= ze_t size, free_pages((unsigned long)cpu, order); } =20 -static dma_addr_t dma_4u_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t sz, - enum dma_data_direction direction, +static dma_addr_t dma_4u_map_phys(struct device *dev, phys_addr_t phys, + size_t sz, enum dma_data_direction direction, unsigned long attrs) { struct iommu *iommu; @@ -273,13 +272,16 @@ static dma_addr_t dma_4u_map_page(struct device *dev,= struct page *page, u32 bus_addr, ret; unsigned long iopte_protection; =20 + if (attrs & DMA_ATTR_MMIO) + goto bad_no_ctx; + iommu =3D dev->archdata.iommu; strbuf =3D dev->archdata.stc; =20 if (unlikely(direction =3D=3D DMA_NONE)) goto bad_no_ctx; =20 - oaddr =3D (unsigned long)(page_address(page) + offset); + oaddr =3D (unsigned long)(phys_to_virt(phys)); npages =3D IO_PAGE_ALIGN(oaddr + sz) - (oaddr & IO_PAGE_MASK); npages >>=3D IO_PAGE_SHIFT; =20 @@ -383,7 +385,7 @@ static void strbuf_flush(struct strbuf *strbuf, struct = iommu *iommu, vaddr, ctx, npages); } =20 -static void dma_4u_unmap_page(struct device *dev, dma_addr_t bus_addr, +static void dma_4u_unmap_phys(struct device *dev, dma_addr_t bus_addr, size_t sz, enum dma_data_direction direction, unsigned long attrs) { @@ -753,8 +755,8 @@ static int dma_4u_supported(struct device *dev, u64 dev= ice_mask) static const struct dma_map_ops sun4u_dma_ops =3D { .alloc =3D dma_4u_alloc_coherent, .free =3D dma_4u_free_coherent, - .map_page =3D dma_4u_map_page, - 
.unmap_page =3D dma_4u_unmap_page, + .map_phys =3D dma_4u_map_phys, + .unmap_phys =3D dma_4u_unmap_phys, .map_sg =3D dma_4u_map_sg, .unmap_sg =3D dma_4u_unmap_sg, .sync_single_for_cpu =3D dma_4u_sync_single_for_cpu, diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c index b720b21ccfbd..d9d2464a948c 100644 --- a/arch/sparc/kernel/pci_sun4v.c +++ b/arch/sparc/kernel/pci_sun4v.c @@ -352,9 +352,8 @@ static void dma_4v_free_coherent(struct device *dev, si= ze_t size, void *cpu, free_pages((unsigned long)cpu, order); } =20 -static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t sz, - enum dma_data_direction direction, +static dma_addr_t dma_4v_map_phys(struct device *dev, phys_addr_t phys, + size_t sz, enum dma_data_direction direction, unsigned long attrs) { struct iommu *iommu; @@ -367,13 +366,16 @@ static dma_addr_t dma_4v_map_page(struct device *dev,= struct page *page, dma_addr_t bus_addr, ret; long entry; =20 + if (attrs & DMA_ATTR_MMIO) + goto bad; + iommu =3D dev->archdata.iommu; atu =3D iommu->atu; =20 if (unlikely(direction =3D=3D DMA_NONE)) goto bad; =20 - oaddr =3D (unsigned long)(page_address(page) + offset); + oaddr =3D (unsigned long)(phys_to_virt(phys)); npages =3D IO_PAGE_ALIGN(oaddr + sz) - (oaddr & IO_PAGE_MASK); npages >>=3D IO_PAGE_SHIFT; =20 @@ -426,7 +428,7 @@ static dma_addr_t dma_4v_map_page(struct device *dev, s= truct page *page, return DMA_MAPPING_ERROR; } =20 -static void dma_4v_unmap_page(struct device *dev, dma_addr_t bus_addr, +static void dma_4v_unmap_phys(struct device *dev, dma_addr_t bus_addr, size_t sz, enum dma_data_direction direction, unsigned long attrs) { @@ -686,8 +688,8 @@ static int dma_4v_supported(struct device *dev, u64 dev= ice_mask) static const struct dma_map_ops sun4v_dma_ops =3D { .alloc =3D dma_4v_alloc_coherent, .free =3D dma_4v_free_coherent, - .map_page =3D dma_4v_map_page, - .unmap_page =3D dma_4v_unmap_page, + .map_phys =3D dma_4v_map_phys, 
+ .unmap_phys =3D dma_4v_unmap_phys, .map_sg =3D dma_4v_map_sg, .unmap_sg =3D dma_4v_unmap_sg, .dma_supported =3D dma_4v_supported, diff --git a/arch/sparc/mm/io-unit.c b/arch/sparc/mm/io-unit.c index d8376f61b4d0..fab303cc3370 100644 --- a/arch/sparc/mm/io-unit.c +++ b/arch/sparc/mm/io-unit.c @@ -142,11 +142,10 @@ nexti: scan =3D find_next_zero_bit(iounit->bmap, limi= t, scan); return vaddr; } =20 -static dma_addr_t iounit_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t len, enum dma_data_direction dir, - unsigned long attrs) +static dma_addr_t iounit_map_phys(struct device *dev, phys_addr_t phys, + size_t len, enum dma_data_direction dir, unsigned long attrs) { - void *vaddr =3D page_address(page) + offset; + void *vaddr =3D phys_to_virt(phys); struct iounit_struct *iounit =3D dev->archdata.iommu; unsigned long ret, flags; =09 @@ -178,7 +177,7 @@ static int iounit_map_sg(struct device *dev, struct sca= tterlist *sgl, int nents, return nents; } =20 -static void iounit_unmap_page(struct device *dev, dma_addr_t vaddr, size_t= len, +static void iounit_unmap_phys(struct device *dev, dma_addr_t vaddr, size_t= len, enum dma_data_direction dir, unsigned long attrs) { struct iounit_struct *iounit =3D dev->archdata.iommu; @@ -279,8 +278,8 @@ static const struct dma_map_ops iounit_dma_ops =3D { .alloc =3D iounit_alloc, .free =3D iounit_free, #endif - .map_page =3D iounit_map_page, - .unmap_page =3D iounit_unmap_page, + .map_phys =3D iounit_map_phys, + .unmap_phys =3D iounit_unmap_phys, .map_sg =3D iounit_map_sg, .unmap_sg =3D iounit_unmap_sg, }; diff --git a/arch/sparc/mm/iommu.c b/arch/sparc/mm/iommu.c index 5a5080db800f..dfcd981fa7ef 100644 --- a/arch/sparc/mm/iommu.c +++ b/arch/sparc/mm/iommu.c @@ -181,18 +181,20 @@ static void iommu_flush_iotlb(iopte_t *iopte, unsigne= d int niopte) } } =20 -static dma_addr_t __sbus_iommu_map_page(struct device *dev, struct page *p= age, - unsigned long offset, size_t len, bool per_page_flush) +static 
dma_addr_t __sbus_iommu_map_phys(struct device *dev, phys_addr_t pa= ddr, + size_t len, bool per_page_flush, unsigned long attrs) { struct iommu_struct *iommu =3D dev->archdata.iommu; - phys_addr_t paddr =3D page_to_phys(page) + offset; - unsigned long off =3D paddr & ~PAGE_MASK; + unsigned long off =3D offset_in_page(paddr); unsigned long npages =3D (off + len + PAGE_SIZE - 1) >> PAGE_SHIFT; unsigned long pfn =3D __phys_to_pfn(paddr); unsigned int busa, busa0; iopte_t *iopte, *iopte0; int ioptex, i; =20 + if (attrs & DMA_ATTR_MMIO) + return DMA_MAPPING_ERROR; + /* XXX So what is maxphys for us and how do drivers know it? */ if (!len || len > 256 * 1024) return DMA_MAPPING_ERROR; @@ -202,10 +204,10 @@ static dma_addr_t __sbus_iommu_map_page(struct device= *dev, struct page *page, * XXX Is this a good assumption? * XXX What if someone else unmaps it here and races us? */ - if (per_page_flush && !PageHighMem(page)) { + if (per_page_flush && !PhysHighMem(paddr)) { unsigned long vaddr, p; =20 - vaddr =3D (unsigned long)page_address(page) + offset; + vaddr =3D (unsigned long)phys_to_virt(paddr); for (p =3D vaddr & PAGE_MASK; p < vaddr + len; p +=3D PAGE_SIZE) flush_page_for_dma(p); } @@ -231,19 +233,19 @@ static dma_addr_t __sbus_iommu_map_page(struct device= *dev, struct page *page, return busa0 + off; } =20 -static dma_addr_t sbus_iommu_map_page_gflush(struct device *dev, - struct page *page, unsigned long offset, size_t len, - enum dma_data_direction dir, unsigned long attrs) +static dma_addr_t sbus_iommu_map_phys_gflush(struct device *dev, + phys_addr_t phys, size_t len, enum dma_data_direction dir, + unsigned long attrs) { flush_page_for_dma(0); - return __sbus_iommu_map_page(dev, page, offset, len, false); + return __sbus_iommu_map_phys(dev, phys, len, false, attrs); } =20 -static dma_addr_t sbus_iommu_map_page_pflush(struct device *dev, - struct page *page, unsigned long offset, size_t len, - enum dma_data_direction dir, unsigned long attrs) +static dma_addr_t 
sbus_iommu_map_phys_pflush(struct device *dev, + phys_addr_t phys, size_t len, enum dma_data_direction dir, + unsigned long attrs) { - return __sbus_iommu_map_page(dev, page, offset, len, true); + return __sbus_iommu_map_phys(dev, phys, len, true, attrs); } =20 static int __sbus_iommu_map_sg(struct device *dev, struct scatterlist *sgl, @@ -254,8 +256,8 @@ static int __sbus_iommu_map_sg(struct device *dev, stru= ct scatterlist *sgl, int j; =20 for_each_sg(sgl, sg, nents, j) { - sg->dma_address =3D__sbus_iommu_map_page(dev, sg_page(sg), - sg->offset, sg->length, per_page_flush); + sg->dma_address =3D __sbus_iommu_map_phys(dev, sg_phys(sg), + sg->length, per_page_flush, attrs); if (sg->dma_address =3D=3D DMA_MAPPING_ERROR) return -EIO; sg->dma_length =3D sg->length; @@ -277,7 +279,7 @@ static int sbus_iommu_map_sg_pflush(struct device *dev,= struct scatterlist *sgl, return __sbus_iommu_map_sg(dev, sgl, nents, dir, attrs, true); } =20 -static void sbus_iommu_unmap_page(struct device *dev, dma_addr_t dma_addr, +static void sbus_iommu_unmap_phys(struct device *dev, dma_addr_t dma_addr, size_t len, enum dma_data_direction dir, unsigned long attrs) { struct iommu_struct *iommu =3D dev->archdata.iommu; @@ -303,7 +305,7 @@ static void sbus_iommu_unmap_sg(struct device *dev, str= uct scatterlist *sgl, int i; =20 for_each_sg(sgl, sg, nents, i) { - sbus_iommu_unmap_page(dev, sg->dma_address, sg->length, dir, + sbus_iommu_unmap_phys(dev, sg->dma_address, sg->length, dir, attrs); sg->dma_address =3D 0x21212121; } @@ -426,8 +428,8 @@ static const struct dma_map_ops sbus_iommu_dma_gflush_o= ps =3D { .alloc =3D sbus_iommu_alloc, .free =3D sbus_iommu_free, #endif - .map_page =3D sbus_iommu_map_page_gflush, - .unmap_page =3D sbus_iommu_unmap_page, + .map_phys =3D sbus_iommu_map_phys_gflush, + .unmap_phys =3D sbus_iommu_unmap_phys, .map_sg =3D sbus_iommu_map_sg_gflush, .unmap_sg =3D sbus_iommu_unmap_sg, }; @@ -437,8 +439,8 @@ static const struct dma_map_ops sbus_iommu_dma_pflush_o= ps 
=3D { .alloc =3D sbus_iommu_alloc, .free =3D sbus_iommu_free, #endif - .map_page =3D sbus_iommu_map_page_pflush, - .unmap_page =3D sbus_iommu_unmap_page, + .map_phys =3D sbus_iommu_map_phys_pflush, + .unmap_phys =3D sbus_iommu_unmap_phys, .map_sg =3D sbus_iommu_map_sg_pflush, .unmap_sg =3D sbus_iommu_unmap_sg, }; --=20 2.51.0 From nobody Wed Oct 1 22:32:41 2025 From: Leon Romanovsky Subject: [PATCH v1 6/9] x86: Use physical address for DMA mapping Date: Sun, 28 Sep 2025 18:02:26 +0300 Perform mechanical conversion from DMA .map_page to .map_phys.
Signed-off-by: Leon Romanovsky --- arch/x86/kernel/amd_gart_64.c | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/arch/x86/kernel/amd_gart_64.c b/arch/x86/kernel/amd_gart_64.c index 3485d419c2f5..f1ffdc0e4a3a 100644 --- a/arch/x86/kernel/amd_gart_64.c +++ b/arch/x86/kernel/amd_gart_64.c @@ -222,13 +222,14 @@ static dma_addr_t dma_map_area(struct device *dev, dm= a_addr_t phys_mem, } =20 /* Map a single area into the IOMMU */ -static dma_addr_t gart_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, - enum dma_data_direction dir, +static dma_addr_t gart_map_phys(struct device *dev, phys_addr_t paddr, + size_t size, enum dma_data_direction dir, unsigned long attrs) { unsigned long bus; - phys_addr_t paddr =3D page_to_phys(page) + offset; + + if (attrs & DMA_ATTR_MMIO) + return DMA_MAPPING_ERROR; =20 if (!need_iommu(dev, paddr, size)) return paddr; @@ -242,7 +243,7 @@ static dma_addr_t gart_map_page(struct device *dev, str= uct page *page, /* * Free a DMA mapping. 
*/ -static void gart_unmap_page(struct device *dev, dma_addr_t dma_addr, +static void gart_unmap_phys(struct device *dev, dma_addr_t dma_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { @@ -282,7 +283,7 @@ static void gart_unmap_sg(struct device *dev, struct sc= atterlist *sg, int nents, for_each_sg(sg, s, nents, i) { if (!s->dma_length || !s->length) break; - gart_unmap_page(dev, s->dma_address, s->dma_length, dir, 0); + gart_unmap_phys(dev, s->dma_address, s->dma_length, dir, 0); } } =20 @@ -487,7 +488,7 @@ static void gart_free_coherent(struct device *dev, size_t size, void *vaddr, dma_addr_t dma_addr, unsigned long attrs) { - gart_unmap_page(dev, dma_addr, size, DMA_BIDIRECTIONAL, 0); + gart_unmap_phys(dev, dma_addr, size, DMA_BIDIRECTIONAL, 0); dma_direct_free(dev, size, vaddr, dma_addr, attrs); } =20 @@ -668,8 +669,8 @@ static __init int init_amd_gatt(struct agp_kern_info *i= nfo) static const struct dma_map_ops gart_dma_ops =3D { .map_sg =3D gart_map_sg, .unmap_sg =3D gart_unmap_sg, - .map_page =3D gart_map_page, - .unmap_page =3D gart_unmap_page, + .map_phys =3D gart_map_phys, + .unmap_phys =3D gart_unmap_phys, .alloc =3D gart_alloc_coherent, .free =3D gart_free_coherent, .mmap =3D dma_common_mmap, --=20 2.51.0 From nobody Wed Oct 1 22:32:41 2025 From: Leon Romanovsky Subject: [PATCH v1 7/9] vdpa: Convert to physical address DMA mapping Date: Sun, 28 Sep 2025 18:02:27 +0300 Use physical address directly in DMA mapping flow. Signed-off-by: Leon Romanovsky --- drivers/vdpa/vdpa_user/iova_domain.c | 11 +++++------ drivers/vdpa/vdpa_user/iova_domain.h | 8 ++++---- drivers/vdpa/vdpa_user/vduse_dev.c | 18 ++++++++++-------- 3 files changed, 19 insertions(+), 18 deletions(-) diff --git a/drivers/vdpa/vdpa_user/iova_domain.c b/drivers/vdpa/vdpa_user/= iova_domain.c index 58116f89d8da..c0ecf01003cd 100644 --- a/drivers/vdpa/vdpa_user/iova_domain.c +++ b/drivers/vdpa/vdpa_user/iova_domain.c @@ -396,17 +396,16 @@ void vduse_domain_sync_single_for_cpu(struct vduse_io= va_domain *domain, read_unlock(&domain->bounce_lock); } =20 -dma_addr_t vduse_domain_map_page(struct vduse_iova_domain *domain, - struct page *page, unsigned long offset, - size_t size, enum dma_data_direction dir, +dma_addr_t vduse_domain_map_phys(struct vduse_iova_domain *domain, + phys_addr_t pa, size_t size, + enum dma_data_direction dir, unsigned long attrs) { struct iova_domain *iovad =3D &domain->stream_iovad; unsigned long limit =3D domain->bounce_size - 1; - phys_addr_t pa =3D page_to_phys(page) + offset; dma_addr_t iova =3D vduse_domain_alloc_iova(iovad, size, limit); =20 - if (!iova + if (!iova || (attrs & DMA_ATTR_MMIO)) return DMA_MAPPING_ERROR; =20 if (vduse_domain_init_bounce_map(domain)) @@ -430,7 +429,7 @@ dma_addr_t vduse_domain_map_page(struct
vduse_iova_doma= in *domain, return DMA_MAPPING_ERROR; } =20 -void vduse_domain_unmap_page(struct vduse_iova_domain *domain, +void vduse_domain_unmap_phys(struct vduse_iova_domain *domain, dma_addr_t dma_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { diff --git a/drivers/vdpa/vdpa_user/iova_domain.h b/drivers/vdpa/vdpa_user/= iova_domain.h index 7f3f0928ec78..7c4546fd856a 100644 --- a/drivers/vdpa/vdpa_user/iova_domain.h +++ b/drivers/vdpa/vdpa_user/iova_domain.h @@ -53,12 +53,12 @@ void vduse_domain_sync_single_for_cpu(struct vduse_iova= _domain *domain, dma_addr_t dma_addr, size_t size, enum dma_data_direction dir); =20 -dma_addr_t vduse_domain_map_page(struct vduse_iova_domain *domain, - struct page *page, unsigned long offset, - size_t size, enum dma_data_direction dir, +dma_addr_t vduse_domain_map_phys(struct vduse_iova_domain *domain, + phys_addr_t phys, size_t size, + enum dma_data_direction dir, unsigned long attrs); =20 -void vduse_domain_unmap_page(struct vduse_iova_domain *domain, +void vduse_domain_unmap_phys(struct vduse_iova_domain *domain, dma_addr_t dma_addr, size_t size, enum dma_data_direction dir, unsigned long attrs); =20 diff --git a/drivers/vdpa/vdpa_user/vduse_dev.c b/drivers/vdpa/vdpa_user/vd= use_dev.c index 04620bb77203..75aa3c9f83fb 100644 --- a/drivers/vdpa/vdpa_user/vduse_dev.c +++ b/drivers/vdpa/vdpa_user/vduse_dev.c @@ -834,25 +834,27 @@ static void vduse_dev_sync_single_for_cpu(struct devi= ce *dev, vduse_domain_sync_single_for_cpu(domain, dma_addr, size, dir); } =20 -static dma_addr_t vduse_dev_map_page(struct device *dev, struct page *page, - unsigned long offset, size_t size, - enum dma_data_direction dir, +static dma_addr_t vduse_dev_map_phys(struct device *dev, phys_addr_t phys, + size_t size, enum dma_data_direction dir, unsigned long attrs) { struct vduse_dev *vdev =3D dev_to_vduse(dev); struct vduse_iova_domain *domain =3D vdev->domain; =20 - return vduse_domain_map_page(domain, page, offset, size, 
dir, attrs); + if (attrs & DMA_ATTR_MMIO) + return DMA_MAPPING_ERROR; + + return vduse_domain_map_phys(domain, phys, size, dir, attrs); } =20 -static void vduse_dev_unmap_page(struct device *dev, dma_addr_t dma_addr, +static void vduse_dev_unmap_phys(struct device *dev, dma_addr_t dma_addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { struct vduse_dev *vdev =3D dev_to_vduse(dev); struct vduse_iova_domain *domain =3D vdev->domain; =20 - return vduse_domain_unmap_page(domain, dma_addr, size, dir, attrs); + return vduse_domain_unmap_phys(domain, dma_addr, size, dir, attrs); } =20 static void *vduse_dev_alloc_coherent(struct device *dev, size_t size, @@ -896,8 +898,8 @@ static size_t vduse_dev_max_mapping_size(struct device = *dev) static const struct dma_map_ops vduse_dev_dma_ops =3D { .sync_single_for_device =3D vduse_dev_sync_single_for_device, .sync_single_for_cpu =3D vduse_dev_sync_single_for_cpu, - .map_page =3D vduse_dev_map_page, - .unmap_page =3D vduse_dev_unmap_page, + .map_phys =3D vduse_dev_map_phys, + .unmap_phys =3D vduse_dev_unmap_phys, .alloc =3D vduse_dev_alloc_coherent, .free =3D vduse_dev_free_coherent, .max_mapping_size =3D vduse_dev_max_mapping_size, --=20 2.51.0 From nobody Wed Oct 1 22:32:41 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 311AD23183F; Sun, 28 Sep 2025 15:03:06 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1759071787; cv=none; b=c83JWQ0QD+KdGp2A7swiK8vAgBwGGYXEm0jeqAmsh7VoF9zzFmpth1MSUF2wYlJqBvmAB8YFmbNkNVf40AqC/4xRB6wkC2BN4RatilcsJcxCUJKp6qgb/S0RIVTExMVcrN3w51NrnJGBRMR66AVqX90hIHg7Ol+aw4kEhIV4Ue0= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; 
From: Leon Romanovsky 
To: Marek Szyprowski 
Cc: Leon Romanovsky , Jason Gunthorpe , Andreas Larsson ,
    Borislav Petkov , Dave Hansen , "David S. Miller" , Geoff Levand ,
    Helge Deller , Ingo Molnar , iommu@lists.linux.dev,
    "James E.J. Bottomley" , Jason Wang , Juergen Gross ,
    linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, Madhavan Srinivasan , Matt Turner ,
    Michael Ellerman , "Michael S. Tsirkin" , Richard Henderson ,
    sparclinux@vger.kernel.org, Stefano Stabellini ,
    Thomas Bogendoerfer , Thomas Gleixner ,
    virtualization@lists.linux.dev, x86@kernel.org,
    xen-devel@lists.xenproject.org, Magnus Lindholm
Subject: [PATCH v1 8/9] xen: swiotlb: Convert mapping routine to rely on physical address
Date: Sun, 28 Sep 2025 18:02:28 +0300
Message-ID: <573fbadd743851838a91a8dbc84b4506cea2192c.1759071169.git.leon@kernel.org>
X-Mailer: git-send-email 2.51.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Leon Romanovsky 

Switch to the .map_phys callback instead of .map_page.

Signed-off-by: Leon Romanovsky 
---
 drivers/xen/grant-dma-ops.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/grant-dma-ops.c b/drivers/xen/grant-dma-ops.c
index 29257d2639db..7f76e516fe24 100644
--- a/drivers/xen/grant-dma-ops.c
+++ b/drivers/xen/grant-dma-ops.c
@@ -163,18 +163,22 @@ static void xen_grant_dma_free_pages(struct device *dev, size_t size,
 	xen_grant_dma_free(dev, size, page_to_virt(vaddr), dma_handle, 0);
 }
 
-static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
-					 unsigned long offset, size_t size,
+static dma_addr_t xen_grant_dma_map_phys(struct device *dev, phys_addr_t phys,
+					 size_t size,
 					 enum dma_data_direction dir,
 					 unsigned long attrs)
 {
 	struct xen_grant_dma_data *data;
+	unsigned long offset = offset_in_page(phys);
 	unsigned long dma_offset = xen_offset_in_page(offset),
 		      pfn_offset = XEN_PFN_DOWN(offset);
 	unsigned int i, n_pages = XEN_PFN_UP(dma_offset + size);
 	grant_ref_t grant;
 	dma_addr_t dma_handle;
 
+	if (attrs & DMA_ATTR_MMIO)
+		return DMA_MAPPING_ERROR;
+
 	if (WARN_ON(dir == DMA_NONE))
 		return DMA_MAPPING_ERROR;
 
@@ -190,7 +194,7 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
 
 	for (i = 0; i < n_pages; i++) {
 		gnttab_grant_foreign_access_ref(grant + i, data->backend_domid,
-				pfn_to_gfn(page_to_xen_pfn(page) + i + pfn_offset),
+				pfn_to_gfn(page_to_xen_pfn(phys_to_page(phys)) + i + pfn_offset),
 				dir == DMA_TO_DEVICE);
 	}
 
@@ -199,7 +203,7 @@ static dma_addr_t xen_grant_dma_map_page(struct device *dev, struct page *page,
 	return dma_handle;
 }
 
-static void xen_grant_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+static void xen_grant_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 				     size_t size, enum dma_data_direction dir,
 				     unsigned long attrs)
 {
@@ -242,7 +246,7 @@ static void xen_grant_dma_unmap_sg(struct device *dev, struct scatterlist *sg,
 		return;
 
 	for_each_sg(sg, s, nents, i)
-		xen_grant_dma_unmap_page(dev, s->dma_address, sg_dma_len(s), dir,
+		xen_grant_dma_unmap_phys(dev, s->dma_address, sg_dma_len(s), dir,
 					 attrs);
 }
 
@@ -257,7 +261,7 @@ static int xen_grant_dma_map_sg(struct device *dev, struct scatterlist *sg,
 		return -EINVAL;
 
 	for_each_sg(sg, s, nents, i) {
-		s->dma_address = xen_grant_dma_map_page(dev, sg_page(s), s->offset,
+		s->dma_address = xen_grant_dma_map_phys(dev, sg_phys(s),
 							s->length, dir, attrs);
 		if (s->dma_address == DMA_MAPPING_ERROR)
 			goto out;
@@ -286,8 +290,8 @@ static const struct dma_map_ops xen_grant_dma_ops = {
 	.free_pages = xen_grant_dma_free_pages,
 	.mmap = dma_common_mmap,
 	.get_sgtable = dma_common_get_sgtable,
-	.map_page = xen_grant_dma_map_page,
-	.unmap_page = xen_grant_dma_unmap_page,
+	.map_phys = xen_grant_dma_map_phys,
+	.unmap_phys = xen_grant_dma_unmap_phys,
 	.map_sg = xen_grant_dma_map_sg,
 	.unmap_sg = xen_grant_dma_unmap_sg,
 	.dma_supported = xen_grant_dma_supported,
-- 
2.51.0

From nobody Wed Oct 1 22:32:41 2025
From: Leon Romanovsky 
To: Marek Szyprowski 
Cc: Leon Romanovsky , Jason Gunthorpe , Andreas Larsson ,
    Borislav Petkov , Dave Hansen , "David S. Miller" , Geoff Levand ,
    Helge Deller , Ingo Molnar , iommu@lists.linux.dev,
    "James E.J. Bottomley" , Jason Wang , Juergen Gross ,
    linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
    linux-mips@vger.kernel.org, linux-parisc@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, Madhavan Srinivasan , Matt Turner ,
    Michael Ellerman , "Michael S. Tsirkin" , Richard Henderson ,
    sparclinux@vger.kernel.org, Stefano Stabellini ,
    Thomas Bogendoerfer , Thomas Gleixner ,
    virtualization@lists.linux.dev, x86@kernel.org,
    xen-devel@lists.xenproject.org, Magnus Lindholm
Subject: [PATCH v1 9/9] dma-mapping: remove unused map_page callback
Date: Sun, 28 Sep 2025 18:02:29 +0300
Message-ID: <27727b8ef9b3ad55a3a28f9622a62561c9988335.1759071169.git.leon@kernel.org>
X-Mailer: git-send-email 2.51.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Leon Romanovsky 

After converting the arch code to use physical address mapping, there are
no remaining users of the .map_page() and .unmap_page() callbacks, so
remove them.
Signed-off-by: Leon Romanovsky 
---
 include/linux/dma-map-ops.h |  7 -------
 kernel/dma/mapping.c        | 12 ------------
 kernel/dma/ops_helpers.c    |  8 +-------
 3 files changed, 1 insertion(+), 26 deletions(-)

diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index a2ec1566aa27..e0a78991fa8a 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -31,13 +31,6 @@ struct dma_map_ops {
 			void *cpu_addr, dma_addr_t dma_addr, size_t size,
 			unsigned long attrs);
 
-	dma_addr_t (*map_page)(struct device *dev, struct page *page,
-			unsigned long offset, size_t size,
-			enum dma_data_direction dir, unsigned long attrs);
-	void (*unmap_page)(struct device *dev, dma_addr_t dma_handle,
-			size_t size, enum dma_data_direction dir,
-			unsigned long attrs);
-
 	dma_addr_t (*map_phys)(struct device *dev, phys_addr_t phys,
 			size_t size, enum dma_data_direction dir,
 			unsigned long attrs);
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 32a85bfdf873..37163eb49f9f 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -171,16 +171,6 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else if (ops->map_phys)
 		addr = ops->map_phys(dev, phys, size, dir, attrs);
-	else if (!is_mmio && ops->map_page) {
-		struct page *page = phys_to_page(phys);
-		size_t offset = offset_in_page(phys);
-
-		/*
-		 * The dma_ops API contract for ops->map_page() requires
-		 * kmappable memory.
-		 */
-		addr = ops->map_page(dev, page, offset, size, dir, attrs);
-	}
 
 	if (!is_mmio)
 		kmsan_handle_dma(phys, size, dir);
@@ -222,8 +212,6 @@ void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
 		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
 	else if (ops->unmap_phys)
 		ops->unmap_phys(dev, addr, size, dir, attrs);
-	else
-		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
 	debug_dma_unmap_phys(dev, addr, size, dir);
 }
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 1eccbdbc99c1..20caf9cabf69 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -76,11 +76,8 @@ struct page *dma_common_alloc_pages(struct device *dev, size_t size,
 	if (use_dma_iommu(dev))
 		*dma_handle = iommu_dma_map_phys(dev, phys, size, dir,
 						 DMA_ATTR_SKIP_CPU_SYNC);
-	else if (ops->map_phys)
-		*dma_handle = ops->map_phys(dev, phys, size, dir,
-					    DMA_ATTR_SKIP_CPU_SYNC);
 	else
-		*dma_handle = ops->map_page(dev, page, 0, size, dir,
+		*dma_handle = ops->map_phys(dev, phys, size, dir,
 					    DMA_ATTR_SKIP_CPU_SYNC);
 	if (*dma_handle == DMA_MAPPING_ERROR) {
 		dma_free_contiguous(dev, page, size);
@@ -102,8 +99,5 @@ void dma_common_free_pages(struct device *dev, size_t size, struct page *page,
 	else if (ops->unmap_phys)
 		ops->unmap_phys(dev, dma_handle, size, dir,
 				DMA_ATTR_SKIP_CPU_SYNC);
-	else if (ops->unmap_page)
-		ops->unmap_page(dev, dma_handle, size, dir,
-				DMA_ATTR_SKIP_CPU_SYNC);
 	dma_free_contiguous(dev, page, size);
 }
-- 
2.51.0