From nobody Fri May 15 09:16:20 2026
From: Robin Murphy
To: hch@lst.de, linux@armlinux.org.uk
Cc: linux-arm-kernel@lists.infradead.org, m.szyprowski@samsung.com,
	arnd@kernel.org, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 1/3] ARM/dma-mapping: Drop .dma_supported for IOMMU ops
Date: Thu, 21 Apr 2022 12:36:57 +0100
Message-Id: <708280c132ffc837674d84bb6c165badbbc97d4e.1650539846.git.robin.murphy@arm.com>
X-Mailer: git-send-email 2.35.3.dirty

When an IOMMU is present, we trust that it should be capable of
remapping any physical memory, and since the device masks represent the
input (virtual) addresses to the IOMMU it makes no sense to validate
them against physical PFNs anyway.

Signed-off-by: Robin Murphy
---
 arch/arm/mm/dma-mapping.c | 23 -----------------------
 1 file changed, 23 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 0f76222cbcbb..6b0095b84a58 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -104,25 +104,6 @@ static struct arm_dma_buffer *arm_dma_buffer_find(void *virt)
  *
  */

-#ifdef CONFIG_ARM_DMA_USE_IOMMU
-/*
- * Return whether the given device DMA address mask can be supported
- * properly. For example, if your device can only drive the low 24-bits
- * during bus mastering, then you would pass 0x00ffffff as the mask
- * to this function.
- */
-static int arm_dma_supported(struct device *dev, u64 mask)
-{
-	unsigned long max_dma_pfn = min(max_pfn - 1, arm_dma_pfn_limit);
-
-	/*
-	 * Translate the device's DMA mask to a PFN limit. This
-	 * PFN number includes the page which we can DMA to.
-	 */
-	return PHYS_PFN(dma_to_phys(dev, mask)) >= max_dma_pfn;
-}
-#endif
-
 static void __dma_clear_buffer(struct page *page, size_t size, int coherent_flag)
 {
 	/*
@@ -1681,8 +1662,6 @@ static const struct dma_map_ops iommu_ops = {

 	.map_resource = arm_iommu_map_resource,
 	.unmap_resource = arm_iommu_unmap_resource,
-
-	.dma_supported = arm_dma_supported,
 };

 static const struct dma_map_ops iommu_coherent_ops = {
@@ -1699,8 +1678,6 @@ static const struct dma_map_ops iommu_coherent_ops = {

 	.map_resource = arm_iommu_map_resource,
 	.unmap_resource = arm_iommu_unmap_resource,
-
-	.dma_supported = arm_dma_supported,
 };

 /**
-- 
2.35.3.dirty

From nobody Fri May 15 09:16:20 2026
From: Robin Murphy
To: hch@lst.de, linux@armlinux.org.uk
Cc: linux-arm-kernel@lists.infradead.org,
	m.szyprowski@samsung.com, arnd@kernel.org,
	iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] ARM/dma-mapping: Consolidate IOMMU ops callbacks
Date: Thu, 21 Apr 2022 12:36:58 +0100
X-Mailer: git-send-email 2.35.3.dirty

Merge the coherent and non-coherent callbacks down to a single
implementation each, relying on the generic dev->dma_coherent flag at
the points where the difference matters.

Signed-off-by: Robin Murphy
---
 arch/arm/mm/dma-mapping.c | 240 +++++++++-----------------------------
 1 file changed, 56 insertions(+), 184 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 6b0095b84a58..10e5e5800d78 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1079,13 +1079,13 @@ static void __iommu_free_atomic(struct device *dev, void *cpu_addr,
 	__free_from_pool(cpu_addr, size);
 }

-static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size,
-	    dma_addr_t *handle, gfp_t gfp, unsigned long attrs,
-	    int coherent_flag)
+static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
+	    dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
 {
 	pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL);
 	struct page **pages;
 	void *addr = NULL;
+	int coherent_flag = dev->dma_coherent ? COHERENT : NORMAL;

 	*handle = DMA_MAPPING_ERROR;
 	size = PAGE_ALIGN(size);
@@ -1128,19 +1128,7 @@ static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size,
 	return NULL;
 }

-static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
-	    dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
-{
-	return __arm_iommu_alloc_attrs(dev, size, handle, gfp, attrs, NORMAL);
-}
-
-static void *arm_coherent_iommu_alloc_attrs(struct device *dev, size_t size,
-	    dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
-{
-	return __arm_iommu_alloc_attrs(dev, size, handle, gfp, attrs, COHERENT);
-}
-
-static int __arm_iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
+static int arm_iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
 		    void *cpu_addr, dma_addr_t dma_addr, size_t size,
 		    unsigned long attrs)
 {
@@ -1154,35 +1142,24 @@ static int __arm_iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma
 	if (vma->vm_pgoff >= nr_pages)
 		return -ENXIO;

+	if (!dev->dma_coherent)
+		vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot);
+
 	err = vm_map_pages(vma, pages, nr_pages);
 	if (err)
 		pr_err("Remapping memory failed: %d\n", err);

 	return err;
 }
-static int arm_iommu_mmap_attrs(struct device *dev,
-		struct vm_area_struct *vma, void *cpu_addr,
-		dma_addr_t dma_addr, size_t size, unsigned long attrs)
-{
-	vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot);
-
-	return __arm_iommu_mmap_attrs(dev, vma, cpu_addr, dma_addr, size, attrs);
-}
-
-static int arm_coherent_iommu_mmap_attrs(struct device *dev,
-		struct vm_area_struct *vma, void *cpu_addr,
-		dma_addr_t dma_addr, size_t size, unsigned long attrs)
-{
-	return __arm_iommu_mmap_attrs(dev, vma, cpu_addr, dma_addr, size, attrs);
-}

 /*
  * free a page as defined by the above mapping.
  * Must not be called with IRQs disabled.
  */
-static void __arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
-	dma_addr_t handle, unsigned long attrs, int coherent_flag)
+static void arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
+	dma_addr_t handle, unsigned long attrs)
 {
+	int coherent_flag = dev->dma_coherent ? COHERENT : NORMAL;
 	struct page **pages;
 	size = PAGE_ALIGN(size);

@@ -1204,19 +1181,6 @@ static void __arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_ad
 	__iommu_free_buffer(dev, pages, size, attrs);
 }

-static void arm_iommu_free_attrs(struct device *dev, size_t size,
-	void *cpu_addr, dma_addr_t handle,
-	unsigned long attrs)
-{
-	__arm_iommu_free_attrs(dev, size, cpu_addr, handle, attrs, NORMAL);
-}
-
-static void arm_coherent_iommu_free_attrs(struct device *dev, size_t size,
-	void *cpu_addr, dma_addr_t handle, unsigned long attrs)
-{
-	__arm_iommu_free_attrs(dev, size, cpu_addr, handle, attrs, COHERENT);
-}
-
 static int arm_iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
 		 void *cpu_addr, dma_addr_t dma_addr,
 		 size_t size, unsigned long attrs)
@@ -1236,8 +1200,7 @@ static int arm_iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
  */
 static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
 			  size_t size, dma_addr_t *handle,
-			  enum dma_data_direction dir, unsigned long attrs,
-			  bool is_coherent)
+			  enum dma_data_direction dir, unsigned long attrs)
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova, iova_base;
@@ -1257,7 +1220,7 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
 		phys_addr_t phys = page_to_phys(sg_page(s));
 		unsigned int len = PAGE_ALIGN(s->offset + s->length);

-		if (!is_coherent && (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
+		if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 			__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);

 		prot = __dma_info_to_prot(dir, attrs);
@@ -1277,9 +1240,20 @@ static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
 	return ret;
 }

-static int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
-		enum dma_data_direction dir, unsigned long attrs,
-		bool is_coherent)
+/**
+ * arm_iommu_map_sg - map a set of SG buffers for streaming mode DMA
+ * @dev: valid struct device pointer
+ * @sg: list of buffers
+ * @nents: number of buffers to map
+ * @dir: DMA transfer direction
+ *
+ * Map a set of buffers described by scatterlist in streaming mode for DMA.
+ * The scatter gather list elements are merged together (if possible) and
+ * tagged with the appropriate dma address and length. They are obtained via
+ * sg_dma_{address,length}.
+ */
+static int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg,
+		int nents, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct scatterlist *s = sg, *dma = sg, *start = sg;
 	int i, count = 0, ret;
@@ -1294,8 +1268,7 @@ static int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,

 		if (s->offset || (size & ~PAGE_MASK) ||
 		    size + s->length > max) {
 			ret = __map_sg_chunk(dev, start, size,
-					     &dma->dma_address, dir, attrs,
-					     is_coherent);
+					     &dma->dma_address, dir, attrs);
 			if (ret < 0)
 				goto bad_mapping;

@@ -1309,8 +1282,7 @@ static int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 		}
 		size += s->length;
 	}
-	ret = __map_sg_chunk(dev, start, size, &dma->dma_address, dir, attrs,
-			     is_coherent);
+	ret = __map_sg_chunk(dev, start, size, &dma->dma_address, dir, attrs);
 	if (ret < 0)
 		goto bad_mapping;

@@ -1327,76 +1299,6 @@ static int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 	return -EINVAL;
 }

-/**
- * arm_coherent_iommu_map_sg - map a set of SG buffers for streaming mode DMA
- * @dev: valid struct device pointer
- * @sg: list of buffers
- * @nents: number of buffers to map
- * @dir: DMA transfer direction
- *
- * Map a set of i/o coherent buffers described by scatterlist in streaming
- * mode for DMA. The scatter gather list elements are merged together (if
- * possible) and tagged with the appropriate dma address and length. They are
- * obtained via sg_dma_{address,length}.
- */
-static int arm_coherent_iommu_map_sg(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir, unsigned long attrs)
-{
-	return __iommu_map_sg(dev, sg, nents, dir, attrs, true);
-}
-
-/**
- * arm_iommu_map_sg - map a set of SG buffers for streaming mode DMA
- * @dev: valid struct device pointer
- * @sg: list of buffers
- * @nents: number of buffers to map
- * @dir: DMA transfer direction
- *
- * Map a set of buffers described by scatterlist in streaming mode for DMA.
- * The scatter gather list elements are merged together (if possible) and
- * tagged with the appropriate dma address and length. They are obtained via
- * sg_dma_{address,length}.
- */
-static int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir, unsigned long attrs)
-{
-	return __iommu_map_sg(dev, sg, nents, dir, attrs, false);
-}
-
-static void __iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
-		int nents, enum dma_data_direction dir,
-		unsigned long attrs, bool is_coherent)
-{
-	struct scatterlist *s;
-	int i;
-
-	for_each_sg(sg, s, nents, i) {
-		if (sg_dma_len(s))
-			__iommu_remove_mapping(dev, sg_dma_address(s),
-					       sg_dma_len(s));
-		if (!is_coherent && (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
-			__dma_page_dev_to_cpu(sg_page(s), s->offset,
-					      s->length, dir);
-	}
-}
-
-/**
- * arm_coherent_iommu_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
- * @dev: valid struct device pointer
- * @sg: list of buffers
- * @nents: number of buffers to unmap (same as was passed to dma_map_sg)
- * @dir: DMA transfer direction (same as was passed to dma_map_sg)
- *
- * Unmap a set of streaming mode DMA translations. Again, CPU access
- * rules concerning calls here are the same as for dma_unmap_single().
- */
-static void arm_coherent_iommu_unmap_sg(struct device *dev,
-		struct scatterlist *sg, int nents, enum dma_data_direction dir,
-		unsigned long attrs)
-{
-	__iommu_unmap_sg(dev, sg, nents, dir, attrs, true);
-}
-
 /**
  * arm_iommu_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
  * @dev: valid struct device pointer
@@ -1412,7 +1314,17 @@ static void arm_iommu_unmap_sg(struct device *dev,
 			enum dma_data_direction dir,
 			unsigned long attrs)
 {
-	__iommu_unmap_sg(dev, sg, nents, dir, attrs, false);
+	struct scatterlist *s;
+	int i;
+
+	for_each_sg(sg, s, nents, i) {
+		if (sg_dma_len(s))
+			__iommu_remove_mapping(dev, sg_dma_address(s),
+					       sg_dma_len(s));
+		if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+			__dma_page_dev_to_cpu(sg_page(s), s->offset,
+					      s->length, dir);
+	}
 }

 /**
@@ -1452,18 +1364,17 @@ static void arm_iommu_sync_sg_for_device(struct device *dev,
 		__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
 }

-
 /**
- * arm_coherent_iommu_map_page
+ * arm_iommu_map_page
  * @dev: valid struct device pointer
 * @page: page that buffer resides in
 * @offset: offset into page for start of buffer
 * @size: size of buffer to map
 * @dir: DMA transfer direction
 *
- * Coherent IOMMU aware version of arm_dma_map_page()
+ * IOMMU aware version of arm_dma_map_page()
 */
-static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *page,
+static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
 	     unsigned long offset, size_t size, enum dma_data_direction dir,
 	     unsigned long attrs)
 {
@@ -1471,6 +1382,9 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p
 	dma_addr_t dma_addr;
 	int ret, prot, len = PAGE_ALIGN(size + offset);

+	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+		__dma_page_cpu_to_dev(page, offset, size, dir);
+
 	dma_addr = __alloc_iova(mapping, len);
 	if (dma_addr == DMA_MAPPING_ERROR)
 		return dma_addr;
@@ -1487,50 +1401,6 @@ static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *p
 	return DMA_MAPPING_ERROR;
 }

-/**
- * arm_iommu_map_page
- * @dev: valid struct device pointer
- * @page: page that buffer resides in
- * @offset: offset into page for start of buffer
- * @size: size of buffer to map
- * @dir: DMA transfer direction
- *
- * IOMMU aware version of arm_dma_map_page()
- */
-static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
-	     unsigned long offset, size_t size, enum dma_data_direction dir,
-	     unsigned long attrs)
-{
-	if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
-		__dma_page_cpu_to_dev(page, offset, size, dir);
-
-	return arm_coherent_iommu_map_page(dev, page, offset, size, dir, attrs);
-}
-
-/**
- * arm_coherent_iommu_unmap_page
- * @dev: valid struct device pointer
- * @handle: DMA address of buffer
- * @size: size of buffer (same as passed to dma_map_page)
- * @dir: DMA transfer direction (same as passed to dma_map_page)
- *
- * Coherent IOMMU aware version of arm_dma_unmap_page()
- */
-static void arm_coherent_iommu_unmap_page(struct device *dev, dma_addr_t handle,
-		size_t size, enum dma_data_direction dir, unsigned long attrs)
-{
-	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
-	dma_addr_t iova = handle & PAGE_MASK;
-	int offset = handle & ~PAGE_MASK;
-	int len = PAGE_ALIGN(size + offset);
-
-	if (!iova)
-		return;
-
-	iommu_unmap(mapping->domain, iova, len);
-	__free_iova(mapping, iova, len);
-}
-
 /**
  * arm_iommu_unmap_page
  * @dev: valid struct device pointer
@@ -1545,15 +1415,17 @@ static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+	struct page *page;
 	int offset = handle & ~PAGE_MASK;
 	int len = PAGE_ALIGN(size + offset);

 	if (!iova)
 		return;

-	if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
+	if (!dev->dma_coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
+		page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 		__dma_page_dev_to_cpu(page, offset, size, dir);
+	}

 	iommu_unmap(mapping->domain, iova, len);
 	__free_iova(mapping, iova, len);
@@ -1665,16 +1537,16 @@ static const struct dma_map_ops iommu_ops = {
 };

 static const struct dma_map_ops iommu_coherent_ops = {
-	.alloc = arm_coherent_iommu_alloc_attrs,
-	.free = arm_coherent_iommu_free_attrs,
-	.mmap = arm_coherent_iommu_mmap_attrs,
+	.alloc = arm_iommu_alloc_attrs,
+	.free = arm_iommu_free_attrs,
+	.mmap = arm_iommu_mmap_attrs,
 	.get_sgtable = arm_iommu_get_sgtable,

-	.map_page = arm_coherent_iommu_map_page,
-	.unmap_page = arm_coherent_iommu_unmap_page,
+	.map_page = arm_iommu_map_page,
+	.unmap_page = arm_iommu_unmap_page,

-	.map_sg = arm_coherent_iommu_map_sg,
-	.unmap_sg = arm_coherent_iommu_unmap_sg,
+	.map_sg = arm_iommu_map_sg,
+	.unmap_sg = arm_iommu_unmap_sg,

 	.map_resource = arm_iommu_map_resource,
 	.unmap_resource = arm_iommu_unmap_resource,
-- 
2.35.3.dirty

From nobody Fri May 15 09:16:20 2026
From: Robin Murphy
To: hch@lst.de, linux@armlinux.org.uk
Cc: linux-arm-kernel@lists.infradead.org, m.szyprowski@samsung.com,
	arnd@kernel.org, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] ARM/dma-mapping: Merge IOMMU ops
Date: Thu, 21 Apr 2022 12:36:59 +0100
Message-Id: <394482083c51c7596587a87edf012010f846feb0.1650539846.git.robin.murphy@arm.com>
X-Mailer: git-send-email 2.35.3.dirty

The dma_sync_* operations are now the only difference between the
coherent and non-coherent IOMMU ops. Some minor tweaks to make those
safe for coherent devices with minimal overhead, and we can condense
down to a single set of DMA ops.

Signed-off-by: Robin Murphy
---
 arch/arm/mm/dma-mapping.c | 37 +++++++++++++------------------------
 1 file changed, 13 insertions(+), 24 deletions(-)

diff --git a/arch/arm/mm/dma-mapping.c b/arch/arm/mm/dma-mapping.c
index 10e5e5800d78..dd46cce61579 100644
--- a/arch/arm/mm/dma-mapping.c
+++ b/arch/arm/mm/dma-mapping.c
@@ -1341,6 +1341,9 @@ static void arm_iommu_sync_sg_for_cpu(struct device *dev,
 	struct scatterlist *s;
 	int i;

+	if (dev->dma_coherent)
+		return;
+
 	for_each_sg(sg, s, nents, i)
 		__dma_page_dev_to_cpu(sg_page(s), s->offset, s->length, dir);

@@ -1360,6 +1363,9 @@ static void arm_iommu_sync_sg_for_device(struct device *dev,
 	struct scatterlist *s;
 	int i;

+	if (dev->dma_coherent)
+		return;
+
 	for_each_sg(sg, s, nents, i)
 		__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
 }
@@ -1493,12 +1499,13 @@ static void arm_iommu_sync_single_for_cpu(struct device *dev,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+	struct page *page;
 	unsigned int offset = handle & ~PAGE_MASK;

-	if (!iova)
+	if (dev->dma_coherent || !iova)
 		return;

+	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 	__dma_page_dev_to_cpu(page, offset, size, dir);
 }

@@ -1507,12 +1514,13 @@ static void arm_iommu_sync_single_for_device(struct device *dev,
 {
 	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
 	dma_addr_t iova = handle & PAGE_MASK;
-	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
+	struct page *page;
 	unsigned int offset = handle & ~PAGE_MASK;

-	if (!iova)
+	if (dev->dma_coherent || !iova)
 		return;

+	page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
 	__dma_page_cpu_to_dev(page, offset, size, dir);
 }

@@ -1536,22 +1544,6 @@ static const struct dma_map_ops iommu_ops = {
 	.unmap_resource = arm_iommu_unmap_resource,
 };

-static const struct dma_map_ops iommu_coherent_ops = {
-	.alloc = arm_iommu_alloc_attrs,
-	.free = arm_iommu_free_attrs,
-	.mmap = arm_iommu_mmap_attrs,
-	.get_sgtable = arm_iommu_get_sgtable,
-
-	.map_page = arm_iommu_map_page,
-	.unmap_page = arm_iommu_unmap_page,
-
-	.map_sg = arm_iommu_map_sg,
-	.unmap_sg = arm_iommu_unmap_sg,
-
-	.map_resource = arm_iommu_map_resource,
-	.unmap_resource = arm_iommu_unmap_resource,
-};
-
 /**
 * arm_iommu_create_mapping
 * @bus: pointer to the bus holding the client device (for IOMMU calls)
@@ -1750,10 +1742,7 @@ static void arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
 		return;
 	}

-	if (coherent)
-		set_dma_ops(dev, &iommu_coherent_ops);
-	else
-		set_dma_ops(dev, &iommu_ops);
+	set_dma_ops(dev, &iommu_ops);
 }

 static void arm_teardown_iommu_dma_ops(struct device *dev)
-- 
2.35.3.dirty