From: Leon Romanovsky
To: Marek Szyprowski, Robin Murphy, Russell King, Juergen Gross,
	Stefano Stabellini, Oleksandr Tyshchenko, Richard Henderson,
	Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley",
	Helge Deller, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Geoff Levand,
	"David S. Miller", Andreas Larsson, Thomas Gleixner,
	Ingo Molnar, Borislav Petkov, Dave Hansen, x86@kernel.org,
	"H. Peter Anvin"
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org
Subject: [PATCH v5 11/14] sparc: Use physical address DMA mapping
Date: Wed, 15 Oct 2025 12:12:57 +0300
Message-ID: <20251015-remove-map-page-v5-11-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

From: Leon Romanovsky

Convert the sparc architecture DMA code to use the .map_phys callback
instead of .map_page.

Signed-off-by: Leon Romanovsky
---
 arch/sparc/kernel/iommu.c     | 30 +++++++++++++++++-----------
 arch/sparc/kernel/pci_sun4v.c | 31 ++++++++++++++++++-----------
 arch/sparc/mm/io-unit.c       | 38 ++++++++++++++++++-----------------
 arch/sparc/mm/iommu.c         | 46 ++++++++++++++++++++++---------------------
 4 files changed, 82 insertions(+), 63 deletions(-)

diff --git a/arch/sparc/kernel/iommu.c b/arch/sparc/kernel/iommu.c
index da0363692528..46ef88bc9c26 100644
--- a/arch/sparc/kernel/iommu.c
+++ b/arch/sparc/kernel/iommu.c
@@ -260,26 +260,35 @@ static void dma_4u_free_coherent(struct device *dev, size_t size,
 	free_pages((unsigned long)cpu, order);
 }
 
-static dma_addr_t dma_4u_map_page(struct device *dev, struct page *page,
-				  unsigned long offset, size_t sz,
-				  enum dma_data_direction direction,
+static dma_addr_t dma_4u_map_phys(struct device *dev, phys_addr_t phys,
+				  size_t sz, enum dma_data_direction direction,
 				  unsigned long attrs)
 {
 	struct iommu *iommu;
 	struct strbuf *strbuf;
 	iopte_t *base;
 	unsigned long flags, npages, oaddr;
-	unsigned long i, base_paddr, ctx;
+	unsigned long i, ctx;
 	u32 bus_addr, ret;
 	unsigned long iopte_protection;
 
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		/*
+		 * This check is included because older versions of the code
+		 * lacked MMIO path support, and my ability to test this path
+		 * is limited. However, from a software technical standpoint,
+		 * there is no restriction, as the following code operates
+		 * solely on physical addresses.
+		 */
+		goto bad_no_ctx;
+
 	iommu = dev->archdata.iommu;
 	strbuf = dev->archdata.stc;
 
 	if (unlikely(direction == DMA_NONE))
 		goto bad_no_ctx;
 
-	oaddr = (unsigned long)(page_address(page) + offset);
+	oaddr = (unsigned long)(phys_to_virt(phys));
 	npages = IO_PAGE_ALIGN(oaddr + sz) - (oaddr & IO_PAGE_MASK);
 	npages >>= IO_PAGE_SHIFT;
 
@@ -296,7 +305,6 @@ static dma_addr_t dma_4u_map_page(struct device *dev, struct page *page,
 	bus_addr = (iommu->tbl.table_map_base +
 		    ((base - iommu->page_table) << IO_PAGE_SHIFT));
 	ret = bus_addr | (oaddr & ~IO_PAGE_MASK);
-	base_paddr = __pa(oaddr & IO_PAGE_MASK);
 	if (strbuf->strbuf_enabled)
 		iopte_protection = IOPTE_STREAMING(ctx);
 	else
@@ -304,8 +312,8 @@ static dma_addr_t dma_4u_map_page(struct device *dev, struct page *page,
 	if (direction != DMA_TO_DEVICE)
 		iopte_protection |= IOPTE_WRITE;
 
-	for (i = 0; i < npages; i++, base++, base_paddr += IO_PAGE_SIZE)
-		iopte_val(*base) = iopte_protection | base_paddr;
+	for (i = 0; i < npages; i++, base++, phys += IO_PAGE_SIZE)
+		iopte_val(*base) = iopte_protection | phys;
 
 	return ret;
 
@@ -383,7 +391,7 @@ static void strbuf_flush(struct strbuf *strbuf, struct iommu *iommu,
 			     vaddr, ctx, npages);
 }
 
-static void dma_4u_unmap_page(struct device *dev, dma_addr_t bus_addr,
+static void dma_4u_unmap_phys(struct device *dev, dma_addr_t bus_addr,
 			      size_t sz, enum dma_data_direction direction,
 			      unsigned long attrs)
 {
@@ -753,8 +761,8 @@ static int dma_4u_supported(struct device *dev, u64 device_mask)
 static const struct dma_map_ops sun4u_dma_ops = {
 	.alloc			= dma_4u_alloc_coherent,
 	.free			= dma_4u_free_coherent,
-	.map_page		= dma_4u_map_page,
-	.unmap_page		= dma_4u_unmap_page,
+	.map_phys		= dma_4u_map_phys,
+	.unmap_phys		= dma_4u_unmap_phys,
 	.map_sg			= dma_4u_map_sg,
 	.unmap_sg		= dma_4u_unmap_sg,
 	.sync_single_for_cpu	= dma_4u_sync_single_for_cpu,
diff --git a/arch/sparc/kernel/pci_sun4v.c b/arch/sparc/kernel/pci_sun4v.c
index b720b21ccfbd..791f0a76665f 100644
--- a/arch/sparc/kernel/pci_sun4v.c
+++ b/arch/sparc/kernel/pci_sun4v.c
@@ -352,9 +352,8 @@ static void dma_4v_free_coherent(struct device *dev, size_t size, void *cpu,
 	free_pages((unsigned long)cpu, order);
 }
 
-static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page,
-				  unsigned long offset, size_t sz,
-				  enum dma_data_direction direction,
+static dma_addr_t dma_4v_map_phys(struct device *dev, phys_addr_t phys,
+				  size_t sz, enum dma_data_direction direction,
 				  unsigned long attrs)
 {
 	struct iommu *iommu;
@@ -362,18 +361,27 @@ static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page,
 	struct iommu_map_table *tbl;
 	u64 mask;
 	unsigned long flags, npages, oaddr;
-	unsigned long i, base_paddr;
-	unsigned long prot;
+	unsigned long i, prot;
 	dma_addr_t bus_addr, ret;
 	long entry;
 
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		/*
+		 * This check is included because older versions of the code
+		 * lacked MMIO path support, and my ability to test this path
+		 * is limited. However, from a software technical standpoint,
+		 * there is no restriction, as the following code operates
+		 * solely on physical addresses.
+		 */
+		goto bad;
+
 	iommu = dev->archdata.iommu;
 	atu = iommu->atu;
 
 	if (unlikely(direction == DMA_NONE))
 		goto bad;
 
-	oaddr = (unsigned long)(page_address(page) + offset);
+	oaddr = (unsigned long)(phys_to_virt(phys));
 	npages = IO_PAGE_ALIGN(oaddr + sz) - (oaddr & IO_PAGE_MASK);
 	npages >>= IO_PAGE_SHIFT;
 
@@ -391,7 +399,6 @@ static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page,
 
 	bus_addr = (tbl->table_map_base + (entry << IO_PAGE_SHIFT));
 	ret = bus_addr | (oaddr & ~IO_PAGE_MASK);
-	base_paddr = __pa(oaddr & IO_PAGE_MASK);
 	prot = HV_PCI_MAP_ATTR_READ;
 	if (direction != DMA_TO_DEVICE)
 		prot |= HV_PCI_MAP_ATTR_WRITE;
@@ -403,8 +410,8 @@ static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page,
 
 	iommu_batch_start(dev, prot, entry);
 
-	for (i = 0; i < npages; i++, base_paddr += IO_PAGE_SIZE) {
-		long err = iommu_batch_add(base_paddr, mask);
+	for (i = 0; i < npages; i++, phys += IO_PAGE_SIZE) {
+		long err = iommu_batch_add(phys, mask);
 		if (unlikely(err < 0L))
 			goto iommu_map_fail;
 	}
@@ -426,7 +433,7 @@ static dma_addr_t dma_4v_map_page(struct device *dev, struct page *page,
 	return DMA_MAPPING_ERROR;
 }
 
-static void dma_4v_unmap_page(struct device *dev, dma_addr_t bus_addr,
+static void dma_4v_unmap_phys(struct device *dev, dma_addr_t bus_addr,
 			      size_t sz, enum dma_data_direction direction,
 			      unsigned long attrs)
 {
@@ -686,8 +693,8 @@ static int dma_4v_supported(struct device *dev, u64 device_mask)
 static const struct dma_map_ops sun4v_dma_ops = {
 	.alloc			= dma_4v_alloc_coherent,
 	.free			= dma_4v_free_coherent,
-	.map_page		= dma_4v_map_page,
-	.unmap_page		= dma_4v_unmap_page,
+	.map_phys		= dma_4v_map_phys,
+	.unmap_phys		= dma_4v_unmap_phys,
 	.map_sg			= dma_4v_map_sg,
 	.unmap_sg		= dma_4v_unmap_sg,
 	.dma_supported		= dma_4v_supported,
diff --git a/arch/sparc/mm/io-unit.c b/arch/sparc/mm/io-unit.c
index d8376f61b4d0..d409cb450de4 100644
--- a/arch/sparc/mm/io-unit.c
+++ b/arch/sparc/mm/io-unit.c
@@ -94,13 +94,14 @@ static int __init iounit_init(void)
 subsys_initcall(iounit_init);
 
 /* One has to hold iounit->lock to call this */
-static unsigned long iounit_get_area(struct iounit_struct *iounit, unsigned long vaddr, int size)
+static dma_addr_t iounit_get_area(struct iounit_struct *iounit,
+				  phys_addr_t phys, int size)
 {
 	int i, j, k, npages;
 	unsigned long rotor, scan, limit;
 	iopte_t iopte;
 
-	npages = ((vaddr & ~PAGE_MASK) + size + (PAGE_SIZE-1)) >> PAGE_SHIFT;
+	npages = (offset_in_page(phys) + size + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
 
 	/* A tiny bit of magic ingredience :) */
 	switch (npages) {
@@ -109,7 +110,7 @@ static unsigned long iounit_get_area(struct iounit_struct *iounit, unsigned long
 	default: i = 0x0213; break;
 	}
 	
-	IOD(("iounit_get_area(%08lx,%d[%d])=", vaddr, size, npages));
+	IOD(("%s(%pa,%d[%d])=", __func__, &phys, size, npages));
 	
 next:	j = (i & 15);
 	rotor = iounit->rotor[j - 1];
@@ -124,7 +125,8 @@ nexti:	scan = find_next_zero_bit(iounit->bmap, limit, scan);
 		}
 		i >>= 4;
 		if (!(i & 15))
-			panic("iounit_get_area: Couldn't find free iopte slots for (%08lx,%d)\n", vaddr, size);
+			panic("iounit_get_area: Couldn't find free iopte slots for (%pa,%d)\n",
+			      &phys, size);
 		goto next;
 	}
 	for (k = 1, scan++; k < npages; k++)
@@ -132,30 +134,29 @@ nexti:	scan = find_next_zero_bit(iounit->bmap, limit, scan);
 			goto nexti;
 	iounit->rotor[j - 1] = (scan < limit) ? scan : iounit->limit[j - 1];
 	scan -= npages;
-	iopte = MKIOPTE(__pa(vaddr & PAGE_MASK));
-	vaddr = IOUNIT_DMA_BASE + (scan << PAGE_SHIFT) + (vaddr & ~PAGE_MASK);
+	iopte = MKIOPTE(phys & PAGE_MASK);
+	phys = IOUNIT_DMA_BASE + (scan << PAGE_SHIFT) + offset_in_page(phys);
 	for (k = 0; k < npages; k++, iopte = __iopte(iopte_val(iopte) + 0x100), scan++) {
 		set_bit(scan, iounit->bmap);
 		sbus_writel(iopte_val(iopte), &iounit->page_table[scan]);
 	}
-	IOD(("%08lx\n", vaddr));
-	return vaddr;
+	IOD(("%pa\n", &phys));
+	return phys;
 }
 
-static dma_addr_t iounit_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t len, enum dma_data_direction dir,
-		unsigned long attrs)
+static dma_addr_t iounit_map_phys(struct device *dev, phys_addr_t phys,
+		size_t len, enum dma_data_direction dir, unsigned long attrs)
 {
-	void *vaddr = page_address(page) + offset;
 	struct iounit_struct *iounit = dev->archdata.iommu;
-	unsigned long ret, flags;
+	unsigned long flags;
+	dma_addr_t ret;
 	
 	/* XXX So what is maxphys for us and how do drivers know it? */
 	if (!len || len > 256 * 1024)
 		return DMA_MAPPING_ERROR;
 
 	spin_lock_irqsave(&iounit->lock, flags);
-	ret = iounit_get_area(iounit, (unsigned long)vaddr, len);
+	ret = iounit_get_area(iounit, phys, len);
 	spin_unlock_irqrestore(&iounit->lock, flags);
 	return ret;
 }
@@ -171,14 +172,15 @@ static int iounit_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 	/* FIXME: Cache some resolved pages - often several sg entries are to the same page */
 	spin_lock_irqsave(&iounit->lock, flags);
 	for_each_sg(sgl, sg, nents, i) {
-		sg->dma_address = iounit_get_area(iounit, (unsigned long) sg_virt(sg), sg->length);
+		sg->dma_address =
+			iounit_get_area(iounit, sg_phys(sg), sg->length);
 		sg->dma_length = sg->length;
 	}
 	spin_unlock_irqrestore(&iounit->lock, flags);
 	return nents;
 }
 
-static void iounit_unmap_page(struct device *dev, dma_addr_t vaddr, size_t len,
+static void iounit_unmap_phys(struct device *dev, dma_addr_t vaddr, size_t len,
 		enum dma_data_direction dir, unsigned long attrs)
 {
 	struct iounit_struct *iounit = dev->archdata.iommu;
@@ -279,8 +281,8 @@ static const struct dma_map_ops iounit_dma_ops = {
 	.alloc			= iounit_alloc,
 	.free			= iounit_free,
 #endif
-	.map_page		= iounit_map_page,
-	.unmap_page		= iounit_unmap_page,
+	.map_phys		= iounit_map_phys,
+	.unmap_phys		= iounit_unmap_phys,
 	.map_sg			= iounit_map_sg,
 	.unmap_sg		= iounit_unmap_sg,
 };
diff --git a/arch/sparc/mm/iommu.c b/arch/sparc/mm/iommu.c
index 5a5080db800f..f48adf62724a 100644
--- a/arch/sparc/mm/iommu.c
+++ b/arch/sparc/mm/iommu.c
@@ -181,18 +181,20 @@ static void iommu_flush_iotlb(iopte_t *iopte, unsigned int niopte)
 	}
 }
 
-static dma_addr_t __sbus_iommu_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t len, bool per_page_flush)
+static dma_addr_t __sbus_iommu_map_phys(struct device *dev, phys_addr_t paddr,
+		size_t len, bool per_page_flush, unsigned long attrs)
 {
 	struct iommu_struct *iommu = dev->archdata.iommu;
-	phys_addr_t paddr = page_to_phys(page) + offset;
-	unsigned long off = paddr & ~PAGE_MASK;
+	unsigned long off = offset_in_page(paddr);
 	unsigned long npages = (off + len + PAGE_SIZE - 1) >> PAGE_SHIFT;
 	unsigned long pfn = __phys_to_pfn(paddr);
 	unsigned int busa, busa0;
 	iopte_t *iopte, *iopte0;
 	int ioptex, i;
 
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
+
 	/* XXX So what is maxphys for us and how do drivers know it? */
 	if (!len || len > 256 * 1024)
 		return DMA_MAPPING_ERROR;
@@ -202,10 +204,10 @@ static dma_addr_t __sbus_iommu_map_page(struct device *dev, struct page *page,
 	 * XXX Is this a good assumption?
 	 * XXX What if someone else unmaps it here and races us?
 	 */
-	if (per_page_flush && !PageHighMem(page)) {
+	if (per_page_flush && !PhysHighMem(paddr)) {
 		unsigned long vaddr, p;
 
-		vaddr = (unsigned long)page_address(page) + offset;
+		vaddr = (unsigned long)phys_to_virt(paddr);
 		for (p = vaddr & PAGE_MASK; p < vaddr + len; p += PAGE_SIZE)
 			flush_page_for_dma(p);
 	}
@@ -231,19 +233,19 @@ static dma_addr_t __sbus_iommu_map_page(struct device *dev, struct page *page,
 	return busa0 + off;
 }
 
-static dma_addr_t sbus_iommu_map_page_gflush(struct device *dev,
-		struct page *page, unsigned long offset, size_t len,
-		enum dma_data_direction dir, unsigned long attrs)
+static dma_addr_t sbus_iommu_map_phys_gflush(struct device *dev,
+		phys_addr_t phys, size_t len, enum dma_data_direction dir,
+		unsigned long attrs)
 {
 	flush_page_for_dma(0);
-	return __sbus_iommu_map_page(dev, page, offset, len, false);
+	return __sbus_iommu_map_phys(dev, phys, len, false, attrs);
 }
 
-static dma_addr_t sbus_iommu_map_page_pflush(struct device *dev,
-		struct page *page, unsigned long offset, size_t len,
-		enum dma_data_direction dir, unsigned long attrs)
+static dma_addr_t sbus_iommu_map_phys_pflush(struct device *dev,
+		phys_addr_t phys, size_t len, enum dma_data_direction dir,
+		unsigned long attrs)
 {
-	return __sbus_iommu_map_page(dev, page, offset, len, true);
+	return __sbus_iommu_map_phys(dev, phys, len, true, attrs);
 }
 
 static int __sbus_iommu_map_sg(struct device *dev, struct scatterlist *sgl,
@@ -254,8 +256,8 @@ static int __sbus_iommu_map_sg(struct device *dev, struct scatterlist *sgl,
 	int j;
 
 	for_each_sg(sgl, sg, nents, j) {
-		sg->dma_address =__sbus_iommu_map_page(dev, sg_page(sg),
-				sg->offset, sg->length, per_page_flush);
+		sg->dma_address = __sbus_iommu_map_phys(dev, sg_phys(sg),
+				sg->length, per_page_flush, attrs);
 		if (sg->dma_address == DMA_MAPPING_ERROR)
 			return -EIO;
 		sg->dma_length = sg->length;
@@ -277,7 +279,7 @@ static int sbus_iommu_map_sg_pflush(struct device *dev, struct scatterlist *sgl,
 	return __sbus_iommu_map_sg(dev, sgl, nents, dir, attrs, true);
 }
 
-static void sbus_iommu_unmap_page(struct device *dev, dma_addr_t dma_addr,
+static void sbus_iommu_unmap_phys(struct device *dev, dma_addr_t dma_addr,
 		size_t len, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct iommu_struct *iommu = dev->archdata.iommu;
@@ -303,7 +305,7 @@ static void sbus_iommu_unmap_sg(struct device *dev, struct scatterlist *sgl,
 	int i;
 
 	for_each_sg(sgl, sg, nents, i) {
-		sbus_iommu_unmap_page(dev, sg->dma_address, sg->length, dir,
+		sbus_iommu_unmap_phys(dev, sg->dma_address, sg->length, dir,
 				      attrs);
 		sg->dma_address = 0x21212121;
 	}
@@ -426,8 +428,8 @@ static const struct dma_map_ops sbus_iommu_dma_gflush_ops = {
 	.alloc			= sbus_iommu_alloc,
 	.free			= sbus_iommu_free,
 #endif
-	.map_page		= sbus_iommu_map_page_gflush,
-	.unmap_page		= sbus_iommu_unmap_page,
+	.map_phys		= sbus_iommu_map_phys_gflush,
+	.unmap_phys		= sbus_iommu_unmap_phys,
 	.map_sg			= sbus_iommu_map_sg_gflush,
 	.unmap_sg		= sbus_iommu_unmap_sg,
 };
@@ -437,8 +439,8 @@ static const struct dma_map_ops sbus_iommu_dma_pflush_ops = {
 	.alloc			= sbus_iommu_alloc,
 	.free			= sbus_iommu_free,
 #endif
-	.map_page		= sbus_iommu_map_page_pflush,
-	.unmap_page		= sbus_iommu_unmap_page,
+	.map_phys		= sbus_iommu_map_phys_pflush,
+	.unmap_phys		= sbus_iommu_unmap_phys,
 	.map_sg			= sbus_iommu_map_sg_pflush,
 	.unmap_sg		= sbus_iommu_unmap_sg,
 };
-- 
2.51.0
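
P.S. For readers following the wider .map_page -> .map_phys conversion
rather than the sparc specifics, below is a minimal, userspace-compilable
C sketch of the pattern every hunk above follows. It is an illustration
only, not kernel code: the dma_map_ops machinery is reduced to a stand-in
function, demo_map_phys() and its bus_base parameter are hypothetical
names, and an 8 KiB IOMMU page (IO_PAGE_SHIFT = 13, as on sparc64) is
assumed. What it shows is the invariant the patch preserves: the callback
now receives the physical address directly (no page pointer + offset, so
no page_address()/__pa() round trip), derives the IOMMU page count from
it, and carries the sub-page offset into the returned DMA handle.

	/* Illustrative stand-in only; real definitions live in the kernel tree. */
	#include <stdint.h>
	#include <stddef.h>
	#include <stdio.h>

	typedef uint64_t phys_addr_t;
	typedef uint64_t dma_addr_t;

	#define IO_PAGE_SHIFT	13			/* assumed 8 KiB IOMMU page */
	#define IO_PAGE_SIZE	(1ULL << IO_PAGE_SHIFT)
	#define IO_PAGE_MASK	(~(IO_PAGE_SIZE - 1))

	/* Model of a .map_phys-style callback: phys arrives directly. */
	static dma_addr_t demo_map_phys(phys_addr_t phys, size_t sz,
					dma_addr_t bus_base)
	{
		/* Same arithmetic as dma_4u_map_phys(): round the end up and
		 * the start down, then divide the span by the IOMMU page size. */
		uint64_t npages = (((phys + sz + IO_PAGE_SIZE - 1) & IO_PAGE_MASK) -
				   (phys & IO_PAGE_MASK)) >> IO_PAGE_SHIFT;

		printf("would program %llu IOPTE(s) starting at %#llx\n",
		       (unsigned long long)npages,
		       (unsigned long long)(phys & IO_PAGE_MASK));

		/* The sub-page offset survives into the handle, mirroring
		 * "ret = bus_addr | (oaddr & ~IO_PAGE_MASK)" in the patch. */
		return bus_base | (phys & ~IO_PAGE_MASK);
	}

	int main(void)
	{
		phys_addr_t phys = 0x1234567;	/* deliberately not page aligned */

		printf("dma handle: %#llx\n",
		       (unsigned long long)demo_map_phys(phys, 4096, 0x80000000ULL));
		return 0;
	}

With the example inputs above, one 8 KiB IOPTE covers the 4 KiB buffer and
the returned handle keeps the 0x567 offset — which is why the hunks can
drop base_paddr and feed phys straight into the IOPTE loop.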