From: Leon Romanovsky
To: Marek Szyprowski, Robin Murphy, Russell King, Juergen Gross,
	Stefano Stabellini, Oleksandr Tyshchenko, Richard Henderson,
	Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley",
	Helge Deller, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Geoff Levand,
	"David S. Miller", Andreas Larsson, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
	"H. Peter Anvin"
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, Magnus Lindholm, Jason Gunthorpe,
	Jason Gunthorpe
Subject: [PATCH v5 07/14] alpha: Convert mapping routine to rely on physical address
Date: Wed, 15 Oct 2025 12:12:53 +0300
Message-ID: <20251015-remove-map-page-v5-7-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Alpha doesn't need struct page and can perform mapping based on
physical addresses, so convert it to implement the new .map_phys
callback. As part of this change, remove the useless BUG_ON(): the
DMA mapping layer already ensures that a valid direction is provided.

Tested-by: Magnus Lindholm
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 arch/alpha/kernel/pci_iommu.c | 48 +++++++++++++++++++------------------------
 1 file changed, 21 insertions(+), 27 deletions(-)

diff --git a/arch/alpha/kernel/pci_iommu.c b/arch/alpha/kernel/pci_iommu.c
index dc91de50f906..955b6ca61627 100644
--- a/arch/alpha/kernel/pci_iommu.c
+++ b/arch/alpha/kernel/pci_iommu.c
@@ -224,28 +224,26 @@ static int pci_dac_dma_supported(struct pci_dev *dev, u64 mask)
    until either pci_unmap_single or pci_dma_sync_single is performed.  */
 
 static dma_addr_t
-pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
+pci_map_single_1(struct pci_dev *pdev, phys_addr_t paddr, size_t size,
 		 int dac_allowed)
 {
 	struct pci_controller *hose = pdev ? pdev->sysdata : pci_isa_hose;
 	dma_addr_t max_dma = pdev ? pdev->dma_mask : ISA_DMA_MASK;
+	unsigned long offset = offset_in_page(paddr);
 	struct pci_iommu_arena *arena;
 	long npages, dma_ofs, i;
-	unsigned long paddr;
 	dma_addr_t ret;
 	unsigned int align = 0;
 	struct device *dev = pdev ? &pdev->dev : NULL;
 
-	paddr = __pa(cpu_addr);
-
 #if !DEBUG_NODIRECT
 	/* First check to see if we can use the direct map window.  */
 	if (paddr + size + __direct_map_base - 1 <= max_dma
 	    && paddr + size <= __direct_map_size) {
 		ret = paddr + __direct_map_base;
 
-		DBGA2("pci_map_single: [%p,%zx] -> direct %llx from %ps\n",
-		      cpu_addr, size, ret, __builtin_return_address(0));
+		DBGA2("pci_map_single: [%pa,%zx] -> direct %llx from %ps\n",
+		      &paddr, size, ret, __builtin_return_address(0));
 
 		return ret;
 	}
@@ -255,8 +253,8 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
 	if (dac_allowed) {
 		ret = paddr + alpha_mv.pci_dac_offset;
 
-		DBGA2("pci_map_single: [%p,%zx] -> DAC %llx from %ps\n",
-		      cpu_addr, size, ret, __builtin_return_address(0));
+		DBGA2("pci_map_single: [%pa,%zx] -> DAC %llx from %ps\n",
+		      &paddr, size, ret, __builtin_return_address(0));
 
 		return ret;
 	}
@@ -290,10 +288,10 @@ pci_map_single_1(struct pci_dev *pdev, void *cpu_addr, size_t size,
 		arena->ptes[i + dma_ofs] = mk_iommu_pte(paddr);
 
 	ret = arena->dma_base + dma_ofs * PAGE_SIZE;
-	ret += (unsigned long)cpu_addr & ~PAGE_MASK;
+	ret += offset;
 
-	DBGA2("pci_map_single: [%p,%zx] np %ld -> sg %llx from %ps\n",
-	      cpu_addr, size, npages, ret, __builtin_return_address(0));
+	DBGA2("pci_map_single: [%pa,%zx] np %ld -> sg %llx from %ps\n",
+	      &paddr, size, npages, ret, __builtin_return_address(0));
 
 	return ret;
 }
@@ -322,19 +320,18 @@ static struct pci_dev *alpha_gendev_to_pci(struct device *dev)
 	return NULL;
 }
 
-static dma_addr_t alpha_pci_map_page(struct device *dev, struct page *page,
-				     unsigned long offset, size_t size,
-				     enum dma_data_direction dir,
+static dma_addr_t alpha_pci_map_phys(struct device *dev, phys_addr_t phys,
+				     size_t size, enum dma_data_direction dir,
 				     unsigned long attrs)
 {
 	struct pci_dev *pdev = alpha_gendev_to_pci(dev);
 	int dac_allowed;
 
-	BUG_ON(dir == DMA_NONE);
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
 
-	dac_allowed = pdev ? pci_dac_dma_supported(pdev, pdev->dma_mask) : 0;
-	return pci_map_single_1(pdev, (char *)page_address(page) + offset,
-				size, dac_allowed);
+	dac_allowed = pdev ? pci_dac_dma_supported(pdev, pdev->dma_mask) : 0;
+	return pci_map_single_1(pdev, phys, size, dac_allowed);
 }
 
 /* Unmap a single streaming mode DMA translation.  The DMA_ADDR and
@@ -343,7 +340,7 @@ static dma_addr_t alpha_pci_map_page(struct device *dev, struct page *page,
    the cpu to the buffer are guaranteed to see whatever the device
    wrote there.  */
 
-static void alpha_pci_unmap_page(struct device *dev, dma_addr_t dma_addr,
+static void alpha_pci_unmap_phys(struct device *dev, dma_addr_t dma_addr,
 				 size_t size, enum dma_data_direction dir,
 				 unsigned long attrs)
 {
@@ -353,8 +350,6 @@ static void alpha_pci_unmap_page(struct device *dev, dma_addr_t dma_addr,
 	struct pci_iommu_arena *arena;
 	long dma_ofs, npages;
 
-	BUG_ON(dir == DMA_NONE);
-
 	if (dma_addr >= __direct_map_base
 	    && dma_addr < __direct_map_base + __direct_map_size) {
 		/* Nothing to do.  */
@@ -429,7 +424,7 @@ static void *alpha_pci_alloc_coherent(struct device *dev, size_t size,
 	}
 	memset(cpu_addr, 0, size);
 
-	*dma_addrp = pci_map_single_1(pdev, cpu_addr, size, 0);
+	*dma_addrp = pci_map_single_1(pdev, virt_to_phys(cpu_addr), size, 0);
 	if (*dma_addrp == DMA_MAPPING_ERROR) {
 		free_pages((unsigned long)cpu_addr, order);
 		if (alpha_mv.mv_pci_tbi || (gfp & GFP_DMA))
@@ -643,9 +638,8 @@ static int alpha_pci_map_sg(struct device *dev, struct scatterlist *sg,
 	/* Fast path single entry scatterlists.  */
 	if (nents == 1) {
 		sg->dma_length = sg->length;
-		sg->dma_address
-		  = pci_map_single_1(pdev, SG_ENT_VIRT_ADDRESS(sg),
-				     sg->length, dac_allowed);
+		sg->dma_address = pci_map_single_1(pdev, sg_phys(sg),
+						   sg->length, dac_allowed);
 		if (sg->dma_address == DMA_MAPPING_ERROR)
 			return -EIO;
 		return 1;
@@ -917,8 +911,8 @@ iommu_unbind(struct pci_iommu_arena *arena, long pg_start, long pg_count)
 const struct dma_map_ops alpha_pci_ops = {
 	.alloc			= alpha_pci_alloc_coherent,
 	.free			= alpha_pci_free_coherent,
-	.map_page		= alpha_pci_map_page,
-	.unmap_page		= alpha_pci_unmap_page,
+	.map_phys		= alpha_pci_map_phys,
+	.unmap_phys		= alpha_pci_unmap_phys,
 	.map_sg			= alpha_pci_map_sg,
 	.unmap_sg		= alpha_pci_unmap_sg,
 	.dma_supported		= alpha_pci_supported,
-- 
2.51.0
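
For reference, a minimal caller-side sketch of the physical-address API
this conversion plugs into. It assumes the dma_map_phys() and
dma_unmap_phys() entry points introduced earlier in this series; the
helper names and the buffer handling below are illustrative only and
not part of this patch:

	#include <linux/dma-mapping.h>

	/* Illustrative helper (not in this patch): map a kernel buffer
	 * for device access purely by physical address, with no struct
	 * page involved -- the form alpha_pci_map_phys() now consumes.
	 */
	static int demo_map_buf(struct device *dev, void *buf, size_t size,
				dma_addr_t *dma_out)
	{
		phys_addr_t phys = virt_to_phys(buf);
		dma_addr_t dma;

		/* attrs == 0 means ordinary cacheable RAM. A mapping
		 * tagged DMA_ATTR_MMIO is rejected by the new alpha
		 * callback with DMA_MAPPING_ERROR, per the diff above.
		 */
		dma = dma_map_phys(dev, phys, size, DMA_FROM_DEVICE, 0);
		if (dma_mapping_error(dev, dma))
			return -EIO;

		*dma_out = dma;
		return 0;
	}

	/* Teardown mirrors the map call. */
	static void demo_unmap_buf(struct device *dev, dma_addr_t dma,
				   size_t size)
	{
		dma_unmap_phys(dev, dma, size, DMA_FROM_DEVICE, 0);
	}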