From: Leon Romanovsky
To: Marek Szyprowski, Robin Murphy, Russell King, Juergen Gross,
	Stefano Stabellini, Oleksandr Tyshchenko, Richard Henderson,
	Matt Turner, Thomas Bogendoerfer, "James E.J. Bottomley",
	Helge Deller, Madhavan Srinivasan, Michael Ellerman,
	Nicholas Piggin, Christophe Leroy, Geoff Levand,
	"David S. Miller", Andreas Larsson, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86@kernel.org, "H. Peter Anvin"
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, xen-devel@lists.xenproject.org,
	linux-alpha@vger.kernel.org, linux-mips@vger.kernel.org,
	linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	sparclinux@vger.kernel.org, Jason Gunthorpe
Subject: [PATCH v5 09/14] parisc: Convert DMA map_page to map_phys interface
Date: Wed, 15 Oct 2025 12:12:55 +0300
Message-ID: <20251015-remove-map-page-v5-9-3bbfe3a25cdf@kernel.org>
In-Reply-To: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>
References: <20251015-remove-map-page-v5-0-3bbfe3a25cdf@kernel.org>

From: Leon Romanovsky

Perform a mechanical conversion from the .map_page to the .map_phys
callback.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/parisc/ccio-dma.c      | 54 ++++++++++++++++++++++---------------
 drivers/parisc/iommu-helpers.h | 10 ++++----
 drivers/parisc/sba_iommu.c     | 54 ++++++++++++++++++++----------------
 3 files changed, 59 insertions(+), 59 deletions(-)
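Note (illustrative sketch, not part of the applied diff): the shape of
the conversion, distilled from the two drivers below. The callback
signatures are taken from the code in this patch, and the dma_map_ops
field names follow this series; treat this as a sketch rather than the
canonical core-API header.

	/* Old callback: the caller hands in a page + offset pair. */
	dma_addr_t (*map_page)(struct device *dev, struct page *page,
			       unsigned long offset, size_t size,
			       enum dma_data_direction dir,
			       unsigned long attrs);

	/* New callback: the caller hands in the physical address directly. */
	dma_addr_t (*map_phys)(struct device *dev, phys_addr_t phys,
			       size_t size, enum dma_data_direction dir,
			       unsigned long attrs);

	/* A page-based caller reduces to the phys-based form via: */
	phys_addr_t phys = page_to_phys(page) + offset;

Both converted drivers reject DMA_ATTR_MMIO up front with
DMA_MAPPING_ERROR, presumably because their mapping paths feed the
address through phys_to_virt()/virt_to_phys(), which is only valid for
ordinary kernel memory, not for MMIO space.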
diff --git a/drivers/parisc/ccio-dma.c b/drivers/parisc/ccio-dma.c
index feef537257d0..4e7071714356 100644
--- a/drivers/parisc/ccio-dma.c
+++ b/drivers/parisc/ccio-dma.c
@@ -517,10 +517,10 @@ static u32 hint_lookup[] = {
  * ccio_io_pdir_entry - Initialize an I/O Pdir.
  * @pdir_ptr: A pointer into I/O Pdir.
  * @sid: The Space Identifier.
- * @vba: The virtual address.
+ * @pba: The physical address.
  * @hints: The DMA Hint.
  *
- * Given a virtual address (vba, arg2) and space id, (sid, arg1),
+ * Given a physical address (pba, arg2) and space id, (sid, arg1),
  * load the I/O PDIR entry pointed to by pdir_ptr (arg0). Each IO Pdir
  * entry consists of 8 bytes as shown below (MSB == bit 0):
  *
@@ -543,7 +543,7 @@ static u32 hint_lookup[] = {
  * index are bits 12:19 of the value returned by LCI.
  */ 
 static void
-ccio_io_pdir_entry(__le64 *pdir_ptr, space_t sid, unsigned long vba,
+ccio_io_pdir_entry(__le64 *pdir_ptr, space_t sid, phys_addr_t pba,
 		   unsigned long hints)
 {
 	register unsigned long pa;
@@ -557,7 +557,7 @@ ccio_io_pdir_entry(__le64 *pdir_ptr, space_t sid, unsigned long vba,
 	** "hints" parm includes the VALID bit!
 	** "dep" clobbers the physical address offset bits as well.
 	*/
-	pa = lpa(vba);
+	pa = pba;
 	asm volatile("depw %1,31,12,%0" : "+r" (pa) : "r" (hints));
 	((u32 *)pdir_ptr)[1] = (u32) pa;
 
@@ -582,7 +582,7 @@ ccio_io_pdir_entry(__le64 *pdir_ptr, space_t sid, unsigned long vba,
 	** Grab virtual index [0:11]
 	** Deposit virt_idx bits into I/O PDIR word
 	*/
-	asm volatile ("lci %%r0(%1), %0" : "=r" (ci) : "r" (vba));
+	asm volatile ("lci %%r0(%1), %0" : "=r" (ci) : "r" (phys_to_virt(pba)));
 	asm volatile ("extru %1,19,12,%0" : "+r" (ci) : "r" (ci));
 	asm volatile ("depw %1,15,12,%0" : "+r" (pa) : "r" (ci));
 
@@ -704,14 +704,14 @@ ccio_dma_supported(struct device *dev, u64 mask)
 /**
  * ccio_map_single - Map an address range into the IOMMU.
  * @dev: The PCI device.
- * @addr: The start address of the DMA region.
+ * @addr: The physical address of the DMA region.
  * @size: The length of the DMA region.
  * @direction: The direction of the DMA transaction (to/from device).
  *
  * This function implements the pci_map_single function.
  */
 static dma_addr_t 
-ccio_map_single(struct device *dev, void *addr, size_t size,
+ccio_map_single(struct device *dev, phys_addr_t addr, size_t size,
 		enum dma_data_direction direction)
 {
 	int idx;
@@ -730,7 +730,7 @@ ccio_map_single(struct device *dev, void *addr, size_t size,
 	BUG_ON(size <= 0);
 
 	/* save offset bits */
-	offset = ((unsigned long) addr) & ~IOVP_MASK;
+	offset = offset_in_page(addr);
 
 	/* round up to nearest IOVP_SIZE */
 	size = ALIGN(size + offset, IOVP_SIZE);
@@ -746,15 +746,15 @@ ccio_map_single(struct device *dev, void *addr, size_t size,
 
 	pdir_start = &(ioc->pdir_base[idx]);
 
-	DBG_RUN("%s() %px -> %#lx size: %zu\n",
-		__func__, addr, (long)(iovp | offset), size);
+	DBG_RUN("%s() %pa -> %#lx size: %zu\n",
+		__func__, &addr, (long)(iovp | offset), size);
 
 	/* If not cacheline aligned, force SAFE_DMA on the whole mess */
-	if((size % L1_CACHE_BYTES) || ((unsigned long)addr % L1_CACHE_BYTES))
+	if ((size % L1_CACHE_BYTES) || (addr % L1_CACHE_BYTES))
 		hint |= HINT_SAFE_DMA;
 
 	while(size > 0) {
-		ccio_io_pdir_entry(pdir_start, KERNEL_SPACE, (unsigned long)addr, hint);
+		ccio_io_pdir_entry(pdir_start, KERNEL_SPACE, addr, hint);
 
 		DBG_RUN(" pdir %p %08x%08x\n",
 			pdir_start,
@@ -773,17 +773,18 @@ ccio_map_single(struct device *dev, void *addr, size_t size,
 
 
 static dma_addr_t
-ccio_map_page(struct device *dev, struct page *page, unsigned long offset,
-		size_t size, enum dma_data_direction direction,
-		unsigned long attrs)
+ccio_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction direction, unsigned long attrs)
 {
-	return ccio_map_single(dev, page_address(page) + offset, size,
-			direction);
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
+
+	return ccio_map_single(dev, phys, size, direction);
 }
 
 
 /**
- * ccio_unmap_page - Unmap an address range from the IOMMU.
+ * ccio_unmap_phys - Unmap an address range from the IOMMU.
  * @dev: The PCI device.
  * @iova: The start address of the DMA region.
  * @size: The length of the DMA region.
@@ -791,7 +792,7 @@ ccio_map_page(struct device *dev, struct page *page, unsigned long offset,
  * @attrs: attributes
  */
 static void 
-ccio_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
+ccio_unmap_phys(struct device *dev, dma_addr_t iova, size_t size,
 		enum dma_data_direction direction, unsigned long attrs)
 {
 	struct ioc *ioc;
@@ -853,7 +854,8 @@ ccio_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle, gfp_t flag,
 
 	if (ret) {
 		memset(ret, 0, size);
-		*dma_handle = ccio_map_single(dev, ret, size, DMA_BIDIRECTIONAL);
+		*dma_handle = ccio_map_single(dev, virt_to_phys(ret), size,
+					      DMA_BIDIRECTIONAL);
 	}
 
 	return ret;
@@ -873,7 +875,7 @@ static void
 ccio_free(struct device *dev, size_t size, void *cpu_addr,
 		dma_addr_t dma_handle, unsigned long attrs)
 {
-	ccio_unmap_page(dev, dma_handle, size, 0, 0);
+	ccio_unmap_phys(dev, dma_handle, size, 0, 0);
 	free_pages((unsigned long)cpu_addr, get_order(size));
 }
 
@@ -920,7 +922,7 @@ ccio_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
 	/* Fast path single entry scatterlists. */
 	if (nents == 1) {
 		sg_dma_address(sglist) = ccio_map_single(dev,
-				sg_virt(sglist), sglist->length,
+				sg_phys(sglist), sglist->length,
 				direction);
 		sg_dma_len(sglist) = sglist->length;
 		return 1;
@@ -1004,7 +1006,7 @@ ccio_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 #ifdef CCIO_COLLECT_STATS
 		ioc->usg_pages += sg_dma_len(sglist) >> PAGE_SHIFT;
 #endif
-		ccio_unmap_page(dev, sg_dma_address(sglist),
+		ccio_unmap_phys(dev, sg_dma_address(sglist),
 				sg_dma_len(sglist), direction, 0);
 		++sglist;
 		nents--;
@@ -1017,8 +1019,8 @@ static const struct dma_map_ops ccio_ops = {
 	.dma_supported =	ccio_dma_supported,
 	.alloc =		ccio_alloc,
 	.free =			ccio_free,
-	.map_page =		ccio_map_page,
-	.unmap_page =		ccio_unmap_page,
+	.map_phys =		ccio_map_phys,
+	.unmap_phys =		ccio_unmap_phys,
 	.map_sg =		ccio_map_sg,
 	.unmap_sg =		ccio_unmap_sg,
 	.get_sgtable =		dma_common_get_sgtable,
@@ -1072,7 +1074,7 @@ static int ccio_proc_info(struct seq_file *m, void *p)
 			   ioc->msingle_calls, ioc->msingle_pages,
 			   (int)((ioc->msingle_pages * 1000)/ioc->msingle_calls));
 
-		/* KLUGE - unmap_sg calls unmap_page for each mapped page */
+		/* KLUGE - unmap_sg calls unmap_phys for each mapped page */
 		min = ioc->usingle_calls - ioc->usg_calls;
 		max = ioc->usingle_pages - ioc->usg_pages;
 		seq_printf(m, "pci_unmap_single: %8ld calls  %8ld pages (avg %d/1000)\n",
diff --git a/drivers/parisc/iommu-helpers.h b/drivers/parisc/iommu-helpers.h
index c43f1a212a5c..0691884f5095 100644
--- a/drivers/parisc/iommu-helpers.h
+++ b/drivers/parisc/iommu-helpers.h
@@ -14,7 +14,7 @@ static inline unsigned int
 iommu_fill_pdir(struct ioc *ioc, struct scatterlist *startsg, int nents, 
 		unsigned long hint,
-		void (*iommu_io_pdir_entry)(__le64 *, space_t, unsigned long,
+		void (*iommu_io_pdir_entry)(__le64 *, space_t, phys_addr_t,
 					    unsigned long))
 {
 	struct scatterlist *dma_sg = startsg;	/* pointer to current DMA */
@@ -28,7 +28,7 @@ iommu_fill_pdir(struct ioc *ioc, struct scatterlist *startsg, int nents,
 		dma_sg--;
 
 	while (nents-- > 0) {
-		unsigned long vaddr;
+		phys_addr_t paddr;
 		long size;
 
 		DBG_RUN_SG(" %d : %08lx %p/%05x\n", nents,
@@ -67,7 +67,7 @@ iommu_fill_pdir(struct ioc *ioc, struct scatterlist *startsg, int nents,
 		
 		BUG_ON(pdirp == NULL);
 		
-		vaddr = (unsigned long)sg_virt(startsg);
+		paddr = sg_phys(startsg);
 		sg_dma_len(dma_sg) += startsg->length;
 		size = startsg->length + dma_offset;
 		dma_offset = 0;
@@ -76,8 +76,8 @@ iommu_fill_pdir(struct ioc *ioc, struct scatterlist *startsg, int nents,
 #endif
 		do {
 			iommu_io_pdir_entry(pdirp, KERNEL_SPACE, 
-					    vaddr, hint);
-			vaddr += IOVP_SIZE;
+					    paddr, hint);
+			paddr += IOVP_SIZE;
 			size -= IOVP_SIZE;
 			pdirp++;
 		} while(unlikely(size > 0));
diff --git a/drivers/parisc/sba_iommu.c b/drivers/parisc/sba_iommu.c
index fc3863c09f83..a6eb6bffa5ea 100644
--- a/drivers/parisc/sba_iommu.c
+++ b/drivers/parisc/sba_iommu.c
@@ -532,7 +532,7 @@ typedef unsigned long space_t;
  * sba_io_pdir_entry - fill in one IO PDIR entry
  * @pdir_ptr: pointer to IO PDIR entry
  * @sid: process Space ID - currently only support KERNEL_SPACE
- * @vba: Virtual CPU address of buffer to map
+ * @pba: Physical address of buffer to map
  * @hint: DMA hint set to use for this mapping
  *
  * SBA Mapping Routine
@@ -569,20 +569,17 @@ typedef unsigned long space_t;
  */
 
 static void
-sba_io_pdir_entry(__le64 *pdir_ptr, space_t sid, unsigned long vba,
+sba_io_pdir_entry(__le64 *pdir_ptr, space_t sid, phys_addr_t pba,
 		  unsigned long hint)
 {
-	u64 pa; /* physical address */
 	register unsigned ci; /* coherent index */
 
-	pa = lpa(vba);
-	pa &= IOVP_MASK;
+	asm("lci 0(%1), %0" : "=r" (ci) : "r" (phys_to_virt(pba)));
+	pba &= IOVP_MASK;
+	pba |= (ci >> PAGE_SHIFT) & 0xff; /* move CI (8 bits) into lowest byte */
 
-	asm("lci 0(%1), %0" : "=r" (ci) : "r" (vba));
-	pa |= (ci >> PAGE_SHIFT) & 0xff; /* move CI (8 bits) into lowest byte */
-
-	pa |= SBA_PDIR_VALID_BIT;	/* set "valid" bit */
-	*pdir_ptr = cpu_to_le64(pa);	/* swap and store into I/O Pdir */
+	pba |= SBA_PDIR_VALID_BIT;	/* set "valid" bit */
+	*pdir_ptr = cpu_to_le64(pba);	/* swap and store into I/O Pdir */
 
 	/*
 	 * If the PDC_MODEL capabilities has Non-coherent IO-PDIR bit set
@@ -707,7 +704,7 @@ static int sba_dma_supported( struct device *dev, u64 mask)
  * See Documentation/core-api/dma-api-howto.rst
  */
 static dma_addr_t
-sba_map_single(struct device *dev, void *addr, size_t size,
+sba_map_single(struct device *dev, phys_addr_t addr, size_t size,
 	       enum dma_data_direction direction)
 {
 	struct ioc *ioc;
@@ -722,7 +719,7 @@ sba_map_single(struct device *dev, void *addr, size_t size,
 		return DMA_MAPPING_ERROR;
 
 	/* save offset bits */
-	offset = ((dma_addr_t) (long) addr) & ~IOVP_MASK;
+	offset = offset_in_page(addr);
 
 	/* round up to nearest IOVP_SIZE */
 	size = (size + offset + ~IOVP_MASK) & IOVP_MASK;
@@ -739,13 +736,13 @@ sba_map_single(struct device *dev, void *addr, size_t size,
 	pide = sba_alloc_range(ioc, dev, size);
 	iovp = (dma_addr_t) pide << IOVP_SHIFT;
 
-	DBG_RUN("%s() 0x%p -> 0x%lx\n",
-		__func__, addr, (long) iovp | offset);
+	DBG_RUN("%s() 0x%pa -> 0x%lx\n",
+		__func__, &addr, (long) iovp | offset);
 
 	pdir_start = &(ioc->pdir_base[pide]);
 
 	while (size > 0) {
-		sba_io_pdir_entry(pdir_start, KERNEL_SPACE, (unsigned long) addr, 0);
+		sba_io_pdir_entry(pdir_start, KERNEL_SPACE, addr, 0);
 
 		DBG_RUN("	pdir 0x%p %02x%02x%02x%02x%02x%02x%02x%02x\n",
 			pdir_start,
@@ -778,17 +775,18 @@ sba_map_single(struct device *dev, void *addr, size_t size,
 
 
 static dma_addr_t
-sba_map_page(struct device *dev, struct page *page, unsigned long offset,
-		size_t size, enum dma_data_direction direction,
-		unsigned long attrs)
+sba_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction direction, unsigned long attrs)
 {
-	return sba_map_single(dev, page_address(page) + offset, size,
-			direction);
+	if (unlikely(attrs & DMA_ATTR_MMIO))
+		return DMA_MAPPING_ERROR;
+
+	return sba_map_single(dev, phys, size, direction);
 }
 
 
 /**
- * sba_unmap_page - unmap one IOVA and free resources
+ * sba_unmap_phys - unmap one IOVA and free resources
  * @dev: instance of PCI owned by the driver that's asking.
  * @iova: IOVA of driver buffer previously mapped.
  * @size: number of bytes mapped in driver buffer.
@@ -798,7 +796,7 @@ sba_map_page(struct device *dev, struct page *page, unsigned long offset,
  * See Documentation/core-api/dma-api-howto.rst
  */
 static void
-sba_unmap_page(struct device *dev, dma_addr_t iova, size_t size,
+sba_unmap_phys(struct device *dev, dma_addr_t iova, size_t size,
 	       enum dma_data_direction direction, unsigned long attrs)
 {
 	struct ioc *ioc;
@@ -893,7 +891,7 @@ static void *sba_alloc(struct device *hwdev, size_t size, dma_addr_t *dma_handle
 
 	if (ret) {
 		memset(ret, 0, size);
-		*dma_handle = sba_map_single(hwdev, ret, size, 0);
+		*dma_handle = sba_map_single(hwdev, virt_to_phys(ret), size, 0);
 	}
 
 	return ret;
@@ -914,7 +912,7 @@ static void
 sba_free(struct device *hwdev, size_t size, void *vaddr,
 		    dma_addr_t dma_handle, unsigned long attrs)
 {
-	sba_unmap_page(hwdev, dma_handle, size, 0, 0);
+	sba_unmap_phys(hwdev, dma_handle, size, 0, 0);
 	free_pages((unsigned long) vaddr, get_order(size));
 }
 
@@ -962,7 +960,7 @@ sba_map_sg(struct device *dev, struct scatterlist *sglist, int nents,
 
 	/* Fast path single entry scatterlists. */
 	if (nents == 1) {
-		sg_dma_address(sglist) = sba_map_single(dev, sg_virt(sglist),
+		sg_dma_address(sglist) = sba_map_single(dev, sg_phys(sglist),
 						sglist->length, direction);
 		sg_dma_len(sglist) = sglist->length;
 		return 1;
@@ -1061,7 +1059,7 @@ sba_unmap_sg(struct device *dev, struct scatterlist *sglist, int nents,
 
 	while (nents && sg_dma_len(sglist)) {
 
-		sba_unmap_page(dev, sg_dma_address(sglist), sg_dma_len(sglist),
+		sba_unmap_phys(dev, sg_dma_address(sglist), sg_dma_len(sglist),
 				direction, 0);
 #ifdef SBA_COLLECT_STATS
 		ioc->usg_pages += ((sg_dma_address(sglist) & ~IOVP_MASK) + sg_dma_len(sglist) + IOVP_SIZE - 1) >> PAGE_SHIFT;
@@ -1085,8 +1083,8 @@ static const struct dma_map_ops sba_ops = {
 	.dma_supported =	sba_dma_supported,
 	.alloc =		sba_alloc,
 	.free =			sba_free,
-	.map_page =		sba_map_page,
-	.unmap_page =		sba_unmap_page,
+	.map_phys =		sba_map_phys,
+	.unmap_phys =		sba_unmap_phys,
 	.map_sg =		sba_map_sg,
 	.unmap_sg =		sba_unmap_sg,
 	.get_sgtable =		dma_common_get_sgtable,
-- 
2.51.0