From: Barry Song <21cnbao@gmail.com>
To: catalin.marinas@arm.com, m.szyprowski@samsung.com, robin.murphy@arm.com,
	will@kernel.org, iommu@lists.linux.dev, linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
	Barry Song, Leon Romanovsky, Ada Couprie Diaz, Ard Biesheuvel,
	Marc Zyngier, Anshuman Khandual, Ryan Roberts, Suren Baghdasaryan,
	Joerg Roedel, Juergen Gross, Stefano Stabellini, Oleksandr Tyshchenko,
	Tangquan Zheng
Subject: [PATCH v2 4/8] dma-mapping: Separate DMA sync issuing and completion waiting
Date: Sat, 27 Dec 2025 11:52:44 +1300
Message-ID: <20251226225254.46197-5-21cnbao@gmail.com>
X-Mailer: git-send-email 2.48.1
In-Reply-To: <20251226225254.46197-1-21cnbao@gmail.com>
References: <20251226225254.46197-1-21cnbao@gmail.com>

From: Barry Song

Currently, arch_sync_dma_for_cpu() and arch_sync_dma_for_device() always
wait for the completion of each DMA buffer: issuing the DMA sync and
waiting for its completion happen in a single API call. For
scatter-gather lists with multiple entries, this issue-then-wait
sequence is repeated for each entry, which can hurt performance.

Architectures such as arm64 can instead issue the DMA sync operations
for all entries first and then wait for completion once. To enable
this, arch_sync_dma_for_*() now only issues DMA sync operations, and a
separate flush provides the completion barrier. On arm64, the flush is
implemented as a dsb instruction inside arch_sync_dma_flush().

For now, add arch_sync_dma_flush() after each arch_sync_dma_for_*()
call. Since arch_sync_dma_flush() is defined as a no-op on all
architectures except arm64, this patch does not change existing
behavior. Subsequent patches will introduce true batching for SG DMA
buffers.
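To illustrate the intended end state, here is a minimal sketch of how
an SG sync path could look once batching is wired up. This is not part
of this patch, and example_sync_sg_for_cpu() is a hypothetical helper:
per-entry cache maintenance is issued without waiting, and a single
arch_sync_dma_flush() at the end provides the completion barrier.

	/*
	 * Illustrative sketch only, not part of this patch: issue all
	 * per-entry maintenance first, then wait once, rather than
	 * pairing each issue with its own completion barrier.
	 */
	static void example_sync_sg_for_cpu(struct device *dev,
					    struct scatterlist *sgl, int nelems,
					    enum dma_data_direction dir)
	{
		struct scatterlist *sg;
		int i;

		if (dev_is_dma_coherent(dev))
			return;

		for_each_sg(sgl, sg, nelems, i)	/* issue, do not wait */
			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);

		arch_sync_dma_flush();		/* one dsb(sy) on arm64 */
	}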
Cc: Leon Romanovsky
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Marek Szyprowski
Cc: Robin Murphy
Cc: Ada Couprie Diaz
Cc: Ard Biesheuvel
Cc: Marc Zyngier
Cc: Anshuman Khandual
Cc: Ryan Roberts
Cc: Suren Baghdasaryan
Cc: Joerg Roedel
Cc: Juergen Gross
Cc: Stefano Stabellini
Cc: Oleksandr Tyshchenko
Cc: Tangquan Zheng
Signed-off-by: Barry Song
Reviewed-by: Juergen Gross # drivers/xen/swiotlb-xen.c
Reviewed-by: Leon Romanovsky
---
 arch/arm64/include/asm/cache.h |  6 ++++++
 arch/arm64/mm/dma-mapping.c    |  4 ++--
 drivers/iommu/dma-iommu.c      | 37 +++++++++++++++++++++++++---------
 drivers/xen/swiotlb-xen.c      | 24 ++++++++++++++--------
 include/linux/dma-map-ops.h    |  6 ++++++
 kernel/dma/direct.c            |  8 ++++++--
 kernel/dma/direct.h            |  9 +++++++--
 kernel/dma/swiotlb.c           |  4 +++-
 8 files changed, 73 insertions(+), 25 deletions(-)

diff --git a/arch/arm64/include/asm/cache.h b/arch/arm64/include/asm/cache.h
index dd2c8586a725..487fb7c355ed 100644
--- a/arch/arm64/include/asm/cache.h
+++ b/arch/arm64/include/asm/cache.h
@@ -87,6 +87,12 @@ int cache_line_size(void);
 
 #define dma_get_cache_alignment	cache_line_size
 
+static inline void arch_sync_dma_flush(void)
+{
+	dsb(sy);
+}
+#define arch_sync_dma_flush arch_sync_dma_flush
+
 /* Compress a u64 MPIDR value into 32 bits. */
 static inline u64 arch_compact_of_hwid(u64 id)
 {
diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index b2b5792b2caa..ae1ae0280eef 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -17,7 +17,7 @@ void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
 {
 	unsigned long start = (unsigned long)phys_to_virt(paddr);
 
-	dcache_clean_poc(start, start + size);
+	dcache_clean_poc_nosync(start, start + size);
 }
 
 void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
@@ -28,7 +28,7 @@ void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
 	if (dir == DMA_TO_DEVICE)
 		return;
 
-	dcache_inval_poc(start, start + size);
+	dcache_inval_poc_nosync(start, start + size);
 }
 
 void arch_dma_prep_coherent(struct page *page, size_t size)
diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index c92088855450..6827763a3877 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1095,8 +1095,10 @@ void iommu_dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (!dev_is_dma_coherent(dev))
+	if (!dev_is_dma_coherent(dev)) {
 		arch_sync_dma_for_cpu(phys, size, dir);
+		arch_sync_dma_flush();
+	}
 
 	swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
@@ -1112,8 +1114,10 @@ void iommu_dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle,
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
 	swiotlb_sync_single_for_device(dev, phys, size, dir);
 
-	if (!dev_is_dma_coherent(dev))
+	if (!dev_is_dma_coherent(dev)) {
 		arch_sync_dma_for_device(phys, size, dir);
+		arch_sync_dma_flush();
+	}
 }
 
 void iommu_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl,
@@ -1122,13 +1126,16 @@ void iommu_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sgl,
 	struct scatterlist *sg;
 	int i;
 
-	if (sg_dma_is_swiotlb(sgl))
+	if (sg_dma_is_swiotlb(sgl)) {
 		for_each_sg(sgl, sg, nelems, i)
 			iommu_dma_sync_single_for_cpu(dev, sg_dma_address(sg),
 						      sg->length, dir);
-	else if (!dev_is_dma_coherent(dev))
-		for_each_sg(sgl, sg, nelems, i)
+	} else if (!dev_is_dma_coherent(dev)) {
+		for_each_sg(sgl, sg, nelems, i) {
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
+			arch_sync_dma_flush();
+		}
+	}
 }
 
 void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
@@ -1143,8 +1150,10 @@ void iommu_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 			sg_dma_address(sg), sg->length, dir);
 	else if (!dev_is_dma_coherent(dev))
-		for_each_sg(sgl, sg, nelems, i)
+		for_each_sg(sgl, sg, nelems, i) {
 			arch_sync_dma_for_device(sg_phys(sg), sg->length, dir);
+			arch_sync_dma_flush();
+		}
 }
 
 static phys_addr_t iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
@@ -1219,8 +1228,10 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 		return DMA_MAPPING_ERROR;
 	}
 
-	if (!coherent && !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
+	if (!coherent && !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
 		arch_sync_dma_for_device(phys, size, dir);
+		arch_sync_dma_flush();
+	}
 
 	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
 	if (iova == DMA_MAPPING_ERROR && !(attrs & DMA_ATTR_MMIO))
@@ -1242,8 +1253,10 @@ void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 	if (WARN_ON(!phys))
 		return;
 
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && !dev_is_dma_coherent(dev))
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) && !dev_is_dma_coherent(dev)) {
 		arch_sync_dma_for_cpu(phys, size, dir);
+		arch_sync_dma_flush();
+	}
 
 	__iommu_dma_unmap(dev, dma_handle, size);
 
@@ -1836,8 +1849,10 @@ static int __dma_iova_link(struct device *dev, dma_addr_t addr,
 	bool coherent = dev_is_dma_coherent(dev);
 	int prot = dma_info_to_prot(dir, coherent, attrs);
 
-	if (!coherent && !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
+	if (!coherent && !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
 		arch_sync_dma_for_device(phys, size, dir);
+		arch_sync_dma_flush();
+	}
 
 	return iommu_map_nosync(iommu_get_dma_domain(dev), addr, phys, size,
 			prot, GFP_ATOMIC);
@@ -2008,8 +2023,10 @@ static void iommu_dma_iova_unlink_range_slow(struct device *dev,
 				end - addr, iovad->granule - iova_start_pad);
 
 		if (!dev_is_dma_coherent(dev) &&
-		    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
+		    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
 			arch_sync_dma_for_cpu(phys, len, dir);
+			arch_sync_dma_flush();
+		}
 
 		swiotlb_tbl_unmap_single(dev, phys, len, dir, attrs);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index ccf25027bec1..b79917e785a5 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -262,10 +262,12 @@ static dma_addr_t xen_swiotlb_map_phys(struct device *dev, phys_addr_t phys,
 
 done:
 	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dev_addr))))
+		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dev_addr)))) {
 			arch_sync_dma_for_device(phys, size, dir);
-		else
+			arch_sync_dma_flush();
+		} else {
 			xen_dma_sync_for_device(dev, dev_addr, size, dir);
+		}
 	}
 	return dev_addr;
 }
@@ -287,10 +289,12 @@ static void xen_swiotlb_unmap_phys(struct device *hwdev, dma_addr_t dev_addr,
 	BUG_ON(dir == DMA_NONE);
 
 	if (!dev_is_dma_coherent(hwdev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(hwdev, dev_addr))))
+		if (pfn_valid(PFN_DOWN(dma_to_phys(hwdev, dev_addr)))) {
 			arch_sync_dma_for_cpu(paddr, size, dir);
-		else
+			arch_sync_dma_flush();
+		} else {
 			xen_dma_sync_for_cpu(hwdev, dev_addr, size, dir);
+		}
 	}
 
 	/* NOTE: We use dev_addr here, not paddr! */
@@ -308,10 +312,12 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 	struct io_tlb_pool *pool;
 
 	if (!dev_is_dma_coherent(dev)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
+		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr)))) {
 			arch_sync_dma_for_cpu(paddr, size, dir);
-		else
+			arch_sync_dma_flush();
+		} else {
 			xen_dma_sync_for_cpu(dev, dma_addr, size, dir);
+		}
 	}
 
 	pool = xen_swiotlb_find_pool(dev, dma_addr);
@@ -331,10 +337,12 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 	__swiotlb_sync_single_for_device(dev, paddr, size, dir, pool);
 
 	if (!dev_is_dma_coherent(dev)) {
-		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
+		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr)))) {
 			arch_sync_dma_for_device(paddr, size, dir);
-		else
+			arch_sync_dma_flush();
+		} else {
 			xen_dma_sync_for_device(dev, dma_addr, size, dir);
+		}
 	}
 }
 
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index 4809204c674c..e7dd8a63b40e 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -361,6 +361,12 @@ static inline void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
 }
 #endif /* ARCH_HAS_SYNC_DMA_FOR_CPU */
 
+#ifndef arch_sync_dma_flush
+static inline void arch_sync_dma_flush(void)
+{
+}
+#endif
+
 #ifdef CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL
 void arch_sync_dma_for_cpu_all(void);
 #else
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 50c3fe2a1d55..a219911c7b90 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -402,9 +402,11 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 
 		swiotlb_sync_single_for_device(dev, paddr, sg->length, dir);
 
-		if (!dev_is_dma_coherent(dev))
+		if (!dev_is_dma_coherent(dev)) {
 			arch_sync_dma_for_device(paddr, sg->length, dir);
+			arch_sync_dma_flush();
+		}
 	}
 }
 #endif
@@ -421,8 +423,10 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (!dev_is_dma_coherent(dev))
+		if (!dev_is_dma_coherent(dev)) {
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
+			arch_sync_dma_flush();
+		}
 
 		swiotlb_sync_single_for_cpu(dev, paddr, sg->length, dir);
 
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index da2fadf45bcd..a69326eed266 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -60,8 +60,10 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 
 	swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
-	if (!dev_is_dma_coherent(dev))
+	if (!dev_is_dma_coherent(dev)) {
 		arch_sync_dma_for_device(paddr, size, dir);
+		arch_sync_dma_flush();
+	}
 }
 
 static inline void dma_direct_sync_single_for_cpu(struct device *dev,
@@ -71,6 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 
 	if (!dev_is_dma_coherent(dev)) {
 		arch_sync_dma_for_cpu(paddr, size, dir);
+		arch_sync_dma_flush();
 		arch_sync_dma_for_cpu_all();
 	}
 
@@ -109,8 +112,10 @@ static inline dma_addr_t dma_direct_map_phys(struct device *dev,
 	}
 
 	if (!dev_is_dma_coherent(dev) &&
-	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
+	    !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO))) {
 		arch_sync_dma_for_device(phys, size, dir);
+		arch_sync_dma_flush();
+	}
 	return dma_addr;
 
 err_overflow:
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index a547c7693135..7cdbfcdfef86 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -1595,8 +1595,10 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
 		return DMA_MAPPING_ERROR;
 	}
 
-	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC)) {
 		arch_sync_dma_for_device(swiotlb_addr, size, dir);
+		arch_sync_dma_flush();
+	}
 	return dma_addr;
 }
 
-- 
2.43.0