From nobody Tue Apr 7 02:34:19 2026
From: Leon Romanovsky
To: Marek Szyprowski, Robin Murphy, "Michael S. Tsirkin", Petr Tesarik,
 Jonathan Corbet, Shuah Khan, Jason Wang, Xuan Zhuo, Eugenio Pérez,
 Jason Gunthorpe, Leon Romanovsky, Steven Rostedt, Masami Hiramatsu,
 Mathieu Desnoyers, Joerg Roedel, Will Deacon, Andrew Morton
Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
 linux-doc@vger.kernel.org, virtualization@lists.linux.dev,
 linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org,
 linux-mm@kvack.org
Subject: [PATCH v3 1/8] dma-debug: Allow multiple invocations of overlapping entries
Date: Mon, 16 Mar 2026 21:06:45 +0200
Message-ID: <20260316-dma-debug-overlap-v3-1-1dde90a7f08b@nvidia.com>
In-Reply-To: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>
References: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>

From: Leon Romanovsky

Repeated DMA mappings with DMA_ATTR_CPU_CACHE_CLEAN trigger the
following splat. This prevents using the attribute in cases where a DMA
region is shared and reused more than seven times.
------------[ cut here ]------------
DMA-API: exceeded 7 overlapping mappings of cacheline 0x000000000438c440
WARNING: kernel/dma/debug.c:467 at add_dma_entry+0x219/0x280, CPU#4: ibv_rc_pingpong/1644
Modules linked in: xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink iptable_nat nf_nat xt_addrtype br_netfilter rpcsec_gss_krb5 auth_rpcgss oid_registry overlay mlx5_fwctl zram zsmalloc mlx5_ib fuse rpcrdma rdma_ucm ib_uverbs ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm mlx5_core ib_core
CPU: 4 UID: 2733 PID: 1644 Comm: ibv_rc_pingpong Not tainted 6.19.0+ #129 PREEMPT
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
RIP: 0010:add_dma_entry+0x221/0x280
Code: c0 0f 84 f2 fe ff ff 83 e8 01 89 05 6d 99 11 01 e9 e4 fe ff ff 0f 8e 1f ff ff ff 48 8d 3d 07 ef 2d 01 be 07 00 00 00 48 89 e2 <67> 48 0f b9 3a e9 06 ff ff ff 48 c7 c7 98 05 2b 82 c6 05 72 92 28
RSP: 0018:ff1100010e657970 EFLAGS: 00010002
RAX: 0000000000000007 RBX: ff1100010234eb00 RCX: 0000000000000000
RDX: ff1100010e657970 RSI: 0000000000000007 RDI: ffffffff82678660
RBP: 000000000438c440 R08: 0000000000000228 R09: 0000000000000000
R10: 00000000000001be R11: 000000000000089d R12: 0000000000000800
R13: 00000000ffffffef R14: 0000000000000202 R15: ff1100010234eb00
FS:  00007fb15f3f6740(0000) GS:ff110008dcc19000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fb15f32d3a0 CR3: 0000000116f59001 CR4: 0000000000373eb0
Call Trace:
 debug_dma_map_sg+0x1b4/0x390
 __dma_map_sg_attrs+0x6d/0x1a0
 dma_map_sgtable+0x19/0x30
 ib_umem_get+0x284/0x3b0 [ib_uverbs]
 mlx5_ib_reg_user_mr+0x68/0x2a0 [mlx5_ib]
 ib_uverbs_reg_mr+0x17f/0x2a0 [ib_uverbs]
 ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0xc2/0x130 [ib_uverbs]
 ib_uverbs_cmd_verbs+0xa0b/0xae0 [ib_uverbs]
 ? ib_uverbs_handler_UVERBS_METHOD_QUERY_PORT_SPEED+0xe0/0xe0 [ib_uverbs]
 ? mmap_region+0x7a/0xb0
 ? do_mmap+0x3b8/0x5c0
 ib_uverbs_ioctl+0xa7/0x110 [ib_uverbs]
 __x64_sys_ioctl+0x14f/0x8b0
 ? ksys_mmap_pgoff+0xc5/0x190
 do_syscall_64+0x8c/0xbf0
 entry_SYSCALL_64_after_hwframe+0x4b/0x53
RIP: 0033:0x7fb15f5e4eed
Code: 04 25 28 00 00 00 48 89 45 c8 31 c0 48 8d 45 10 c7 45 b0 10 00 00 00 48 89 45 b8 48 8d 45 d0 48 89 45 c0 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1a 48 8b 45 c8 64 48 2b 04 25 28 00 00 00
RSP: 002b:00007ffe09a5c540 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007ffe09a5c5d0 RCX: 00007fb15f5e4eed
RDX: 00007ffe09a5c5f0 RSI: 00000000c0181b01 RDI: 0000000000000003
RBP: 00007ffe09a5c590 R08: 0000000000000028 R09: 00007ffe09a5c794
R10: 0000000000000001 R11: 0000000000000246 R12: 00007ffe09a5c794
R13: 000000000000000c R14: 0000000025a49170 R15: 000000000000000c
---[ end trace 0000000000000000 ]---

Fixes: 61868dc55a11 ("dma-mapping: add DMA_ATTR_CPU_CACHE_CLEAN")
Signed-off-by: Leon Romanovsky
---
 kernel/dma/debug.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 86f87e43438c3..be207be749968 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -453,7 +453,7 @@ static int active_cacheline_set_overlap(phys_addr_t cln, int overlap)
 	return overlap;
 }
 
-static void active_cacheline_inc_overlap(phys_addr_t cln)
+static void active_cacheline_inc_overlap(phys_addr_t cln, bool is_cache_clean)
 {
 	int overlap = active_cacheline_read_overlap(cln);
 
@@ -462,7 +462,7 @@ static void active_cacheline_inc_overlap(phys_addr_t cln)
 	/* If we overflowed the overlap counter then we're potentially
 	 * leaking dma-mappings.
 	 */
-	WARN_ONCE(overlap > ACTIVE_CACHELINE_MAX_OVERLAP,
+	WARN_ONCE(!is_cache_clean && overlap > ACTIVE_CACHELINE_MAX_OVERLAP,
 		  pr_fmt("exceeded %d overlapping mappings of cacheline %pa\n"),
 		  ACTIVE_CACHELINE_MAX_OVERLAP, &cln);
 }
@@ -495,7 +495,7 @@ static int active_cacheline_insert(struct dma_debug_entry *entry,
 	if (rc == -EEXIST) {
 		struct dma_debug_entry *existing;
 
-		active_cacheline_inc_overlap(cln);
+		active_cacheline_inc_overlap(cln, entry->is_cache_clean);
 		existing = radix_tree_lookup(&dma_active_cacheline, cln);
 		/* A lookup failure here after we got -EEXIST is unexpected. */
 		WARN_ON(!existing);
-- 
2.53.0
From nobody Tue Apr 7 02:34:19 2026
From: Leon Romanovsky
Subject: [PATCH v3 2/8] dma-mapping: handle DMA_ATTR_CPU_CACHE_CLEAN in trace output
Date: Mon, 16 Mar 2026 21:06:46 +0200
Message-ID: <20260316-dma-debug-overlap-v3-2-1dde90a7f08b@nvidia.com>
In-Reply-To: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>
References: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>

From: Leon Romanovsky

Tracing prints decoded DMA attribute flags, but it does not yet include
the recently added DMA_ATTR_CPU_CACHE_CLEAN.
Add support for decoding and displaying this attribute in the trace
output.

Fixes: 61868dc55a11 ("dma-mapping: add DMA_ATTR_CPU_CACHE_CLEAN")
Signed-off-by: Leon Romanovsky
---
 include/trace/events/dma.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index 33e99e792f1aa..69cb3805ee81c 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -32,7 +32,8 @@ TRACE_DEFINE_ENUM(DMA_NONE);
 		{ DMA_ATTR_ALLOC_SINGLE_PAGES, "ALLOC_SINGLE_PAGES" }, \
 		{ DMA_ATTR_NO_WARN, "NO_WARN" }, \
 		{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" }, \
-		{ DMA_ATTR_MMIO, "MMIO" })
+		{ DMA_ATTR_MMIO, "MMIO" }, \
+		{ DMA_ATTR_CPU_CACHE_CLEAN, "CACHE_CLEAN" })
 
 DECLARE_EVENT_CLASS(dma_map,
 	TP_PROTO(struct device *dev, phys_addr_t phys_addr, dma_addr_t dma_addr,
-- 
2.53.0
From nobody Tue Apr 7 02:34:19 2026
From: Leon Romanovsky
Subject: [PATCH v3 3/8] dma-mapping: Clarify valid conditions for CPU cache line overlap
Date: Mon, 16 Mar 2026 21:06:47 +0200
Message-ID: <20260316-dma-debug-overlap-v3-3-1dde90a7f08b@nvidia.com>
In-Reply-To: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>
References: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>

From: Leon Romanovsky

Rename the DMA_ATTR_CPU_CACHE_CLEAN attribute to better reflect that it
is a debugging aid informing the DMA core that CPU cache line overlaps
are allowed, and refine the documentation describing its use.

Signed-off-by: Leon Romanovsky
---
 Documentation/core-api/dma-attributes.rst | 22 ++++++++++++++--------
 drivers/virtio/virtio_ring.c              | 10 +++++-----
 include/linux/dma-mapping.h               |  8 ++++----
 include/trace/events/dma.h                |  2 +-
 kernel/dma/debug.c                        |  2 +-
 5 files changed, 25 insertions(+), 19 deletions(-)

diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
index 1d7bfad73b1c7..48cfe86cc06d7 100644
--- a/Documentation/core-api/dma-attributes.rst
+++ b/Documentation/core-api/dma-attributes.rst
@@ -149,11 +149,17 @@ For architectures that require cache flushing for DMA coherence
 DMA_ATTR_MMIO will not perform any cache flushing.
 The address provided must never be mapped cacheable into the CPU.
 
-DMA_ATTR_CPU_CACHE_CLEAN
-------------------------
-
-This attribute indicates the CPU will not dirty any cacheline overlapping this
-DMA_FROM_DEVICE/DMA_BIDIRECTIONAL buffer while it is mapped. This allows
-multiple small buffers to safely share a cacheline without risk of data
-corruption, suppressing DMA debug warnings about overlapping mappings.
-All mappings sharing a cacheline should have this attribute.
+DMA_ATTR_DEBUGGING_IGNORE_CACHELINES
+------------------------------------
+
+This attribute indicates that CPU cache lines may overlap for buffers mapped
+with DMA_FROM_DEVICE or DMA_BIDIRECTIONAL.
+
+Such overlap may occur when callers map multiple small buffers that reside
+within the same cache line. In this case, callers must guarantee that the CPU
+will not dirty these cache lines after the mappings are established. When this
+condition is met, multiple buffers can safely share a cache line without risking
+data corruption.
+
+All mappings that share a cache line must set this attribute to suppress DMA
+debug warnings about overlapping mappings.
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 335692d41617a..fbca7ce1c6bf0 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2912,10 +2912,10 @@ EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);
  * @data: the token identifying the buffer.
  * @gfp: how to do memory allocations (if necessary).
  *
- * Same as virtqueue_add_inbuf but passes DMA_ATTR_CPU_CACHE_CLEAN to indicate
- * that the CPU will not dirty any cacheline overlapping this buffer while it
- * is available, and to suppress overlapping cacheline warnings in DMA debug
- * builds.
+ * Same as virtqueue_add_inbuf but passes DMA_ATTR_DEBUGGING_IGNORE_CACHELINES
+ * to indicate that the CPU will not dirty any cacheline overlapping this buffer
+ * while it is available, and to suppress overlapping cacheline warnings in DMA
+ * debug builds.
  *
  * Caller must ensure we don't call this with other virtqueue operations
  * at the same time (except where noted).
@@ -2928,7 +2928,7 @@ int virtqueue_add_inbuf_cache_clean(struct virtqueue *vq,
 				    gfp_t gfp)
 {
 	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp,
-			     DMA_ATTR_CPU_CACHE_CLEAN);
+			     DMA_ATTR_DEBUGGING_IGNORE_CACHELINES);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_cache_clean);
 
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 29973baa05816..da44394b3a1a7 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -80,11 +80,11 @@
 #define DMA_ATTR_MMIO		(1UL << 10)
 
 /*
- * DMA_ATTR_CPU_CACHE_CLEAN: Indicates the CPU will not dirty any cacheline
- * overlapping this buffer while it is mapped for DMA. All mappings sharing
- * a cacheline must have this attribute for this to be considered safe.
+ * DMA_ATTR_DEBUGGING_IGNORE_CACHELINES: Indicates the CPU cache line can be
+ * overlapped. All mappings sharing a cacheline must have this attribute for
+ * this to be considered safe.
  */
-#define DMA_ATTR_CPU_CACHE_CLEAN (1UL << 11)
+#define DMA_ATTR_DEBUGGING_IGNORE_CACHELINES (1UL << 11)
 
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index 69cb3805ee81c..8c64bc0721fe4 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -33,7 +33,7 @@ TRACE_DEFINE_ENUM(DMA_NONE);
 		{ DMA_ATTR_NO_WARN, "NO_WARN" }, \
 		{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" }, \
 		{ DMA_ATTR_MMIO, "MMIO" }, \
-		{ DMA_ATTR_CPU_CACHE_CLEAN, "CACHE_CLEAN" })
+		{ DMA_ATTR_DEBUGGING_IGNORE_CACHELINES, "CACHELINES_OVERLAP" })
 
 DECLARE_EVENT_CLASS(dma_map,
 	TP_PROTO(struct device *dev, phys_addr_t phys_addr, dma_addr_t dma_addr,
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index be207be749968..83e1cfe05f08d 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -601,7 +601,7 @@ static void add_dma_entry(struct dma_debug_entry *entry, unsigned long attrs)
 	unsigned long flags;
 	int rc;
 
-	entry->is_cache_clean = !!(attrs & DMA_ATTR_CPU_CACHE_CLEAN);
+	entry->is_cache_clean = attrs & DMA_ATTR_DEBUGGING_IGNORE_CACHELINES;
 
 	bucket = get_hash_bucket(entry, &flags);
 	hash_bucket_add(bucket, entry);
-- 
2.53.0
From nobody Tue Apr 7 02:34:19 2026
From: Leon Romanovsky
Subject: [PATCH v3 4/8] dma-mapping: Introduce DMA require coherency attribute
Date: Mon, 16 Mar 2026 21:06:48 +0200
Message-ID: <20260316-dma-debug-overlap-v3-4-1dde90a7f08b@nvidia.com>
In-Reply-To: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>
References: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>

From: Leon Romanovsky

Buffers mapped with this attribute require a DMA-coherent system: they
cannot take the SWIOTLB path, their CPU cache lines may overlap, and no
cache flushing is performed.

Signed-off-by: Leon Romanovsky
---
 Documentation/core-api/dma-attributes.rst | 16 ++++++++++++++++
 include/linux/dma-mapping.h               |  7 +++++++
 include/trace/events/dma.h                |  3 ++-
 kernel/dma/debug.c                        |  3 ++-
 kernel/dma/mapping.c                      |  6 ++++++
 5 files changed, 33 insertions(+), 2 deletions(-)

diff --git a/Documentation/core-api/dma-attributes.rst b/Documentation/core-api/dma-attributes.rst
index 48cfe86cc06d7..441bdc9d08318 100644
--- a/Documentation/core-api/dma-attributes.rst
+++ b/Documentation/core-api/dma-attributes.rst
@@ -163,3 +163,19 @@ data corruption.
 
 All mappings that share a cache line must set this attribute to suppress DMA
 debug warnings about overlapping mappings.
+
+DMA_ATTR_REQUIRE_COHERENT
+-------------------------
+
+DMA mapping requests with the DMA_ATTR_REQUIRE_COHERENT attribute fail on any
+system where SWIOTLB or cache management is required. This should only
+be used to support uAPI designs that require continuous HW DMA
+coherence with userspace processes, for example RDMA and DRM. At a
+minimum the memory being mapped must be userspace memory from
+pin_user_pages() or similar.
+
+Drivers should consider using dma_mmap_pages() instead of this
+interface when building their uAPIs, when possible.
+
+It must never be used in an in-kernel driver that only works with
+kernel memory.
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index da44394b3a1a7..482b919f040f7 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -86,6 +86,13 @@
  */
 #define DMA_ATTR_DEBUGGING_IGNORE_CACHELINES (1UL << 11)
 
+/*
+ * DMA_ATTR_REQUIRE_COHERENT: Indicates that DMA coherency is required.
+ * All mappings that carry this attribute can't work with SWIOTLB and cache
+ * flushing.
+ */
+#define DMA_ATTR_REQUIRE_COHERENT (1UL << 12)
+
 /*
  * A dma_addr_t can hold any valid DMA or bus address for the platform. It can
  * be given to a device to use as a DMA source or target. It is specific to a
diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index 8c64bc0721fe4..63597b0044247 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -33,7 +33,8 @@ TRACE_DEFINE_ENUM(DMA_NONE);
 		{ DMA_ATTR_NO_WARN, "NO_WARN" }, \
 		{ DMA_ATTR_PRIVILEGED, "PRIVILEGED" }, \
 		{ DMA_ATTR_MMIO, "MMIO" }, \
-		{ DMA_ATTR_DEBUGGING_IGNORE_CACHELINES, "CACHELINES_OVERLAP" })
+		{ DMA_ATTR_DEBUGGING_IGNORE_CACHELINES, "CACHELINES_OVERLAP" }, \
+		{ DMA_ATTR_REQUIRE_COHERENT, "REQUIRE_COHERENT" })
 
 DECLARE_EVENT_CLASS(dma_map,
 	TP_PROTO(struct device *dev, phys_addr_t phys_addr, dma_addr_t dma_addr,
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index 83e1cfe05f08d..0677918f06a80 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -601,7 +601,8 @@ static void add_dma_entry(struct dma_debug_entry *entry, unsigned long attrs)
 	unsigned long flags;
 	int rc;
 
-	entry->is_cache_clean = attrs & DMA_ATTR_DEBUGGING_IGNORE_CACHELINES;
+	entry->is_cache_clean = attrs & (DMA_ATTR_DEBUGGING_IGNORE_CACHELINES |
+					 DMA_ATTR_REQUIRE_COHERENT);
 
 	bucket = get_hash_bucket(entry, &flags);
 	hash_bucket_add(bucket, entry);
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 3928a509c44c2..6d3dd0bd3a886 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -164,6 +164,9 @@ dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 	if (WARN_ON_ONCE(!dev->dma_mask))
 		return DMA_MAPPING_ERROR;
 
+	if (!dev_is_dma_coherent(dev) && (attrs & DMA_ATTR_REQUIRE_COHERENT))
+		return DMA_MAPPING_ERROR;
+
 	if (dma_map_direct(dev, ops) ||
 	    (!is_mmio && arch_dma_map_phys_direct(dev, phys + size)))
 		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
@@ -235,6 +238,9 @@ static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 
 	BUG_ON(!valid_dma_direction(dir));
 
+	if (!dev_is_dma_coherent(dev) && (attrs & DMA_ATTR_REQUIRE_COHERENT))
+		return -EOPNOTSUPP;
+
 	if (WARN_ON_ONCE(!dev->dma_mask))
 		return 0;
 
-- 
2.53.0
From nobody Tue Apr 7 02:34:19 2026
From: Leon Romanovsky
Subject: [PATCH v3 5/8] dma-direct: prevent SWIOTLB path when DMA_ATTR_REQUIRE_COHERENT is set
Date: Mon, 16 Mar 2026 21:06:49 +0200
Message-ID: <20260316-dma-debug-overlap-v3-5-1dde90a7f08b@nvidia.com>
In-Reply-To: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>
References: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>

From: Leon Romanovsky

DMA_ATTR_REQUIRE_COHERENT indicates that SWIOTLB must not be used.
Ensure the SWIOTLB path is declined whenever the DMA direct path is
selected.
Signed-off-by: Leon Romanovsky
---
 kernel/dma/direct.h | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index e89f175e9c2d0..6184ff303f080 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -84,7 +84,7 @@ static inline dma_addr_t dma_direct_map_phys(struct device *dev,
 	dma_addr_t dma_addr;
 
 	if (is_swiotlb_force_bounce(dev)) {
-		if (attrs & DMA_ATTR_MMIO)
+		if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
 			return DMA_MAPPING_ERROR;
 
 		return swiotlb_map(dev, phys, size, dir, attrs);
@@ -98,7 +98,8 @@ static inline dma_addr_t dma_direct_map_phys(struct device *dev,
 	dma_addr = phys_to_dma(dev, phys);
 	if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
 	    dma_kmalloc_needs_bounce(dev, size, dir)) {
-		if (is_swiotlb_active(dev))
+		if (is_swiotlb_active(dev) &&
+		    !(attrs & DMA_ATTR_REQUIRE_COHERENT))
 			return swiotlb_map(dev, phys, size, dir, attrs);
 
 		goto err_overflow;
@@ -123,7 +124,7 @@ static inline void dma_direct_unmap_phys(struct device *dev, dma_addr_t addr,
 {
 	phys_addr_t phys;
 
-	if (attrs & DMA_ATTR_MMIO)
+	if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
 		/* nothing to do: uncached and no swiotlb */
 		return;
 
-- 
2.53.0
d=subspace.kernel.org; s=arc-20240116; t=1773688050; c=relaxed/simple; bh=aXwCdr26DGL4+NaZEE3PT+YDkt3E2yVN7za/ReeSxow=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version:Content-Type; b=a84LPs3i3nZ3lyq2832cfJ1ZPeiSUJy6UiBlrAKWAhnT9gkk9d95tCIOpw7Y9xJjx0TrcqrYIPGa3cznUkuz620FC0eBMO0isxVAvDcGdDSEb/GDVVAVSUyVj/rSb2As7NtD1Glu9rq9Z9xiDCBkfAqugeQiT5dvBnhLmy2R7go= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=qShwIFNF; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="qShwIFNF" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 05BF4C19421; Mon, 16 Mar 2026 19:07:29 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1773688050; bh=aXwCdr26DGL4+NaZEE3PT+YDkt3E2yVN7za/ReeSxow=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=qShwIFNFGuszc3wiBhUM/dXxzghLUfqSxNk6dSgejFCa16sP7mJ0YnF6nuboDJQ0I V1hdbvS1Cw4Q8UMNmIyZewdSbxL6PjFsKqY3LNx7tSYGdDqKW1taN6l2fo9mdtgC8y iHvFgLqcBiNy7tYnGM7quwPT+cd6RXOSvqXFQIO3R0aHDLowDG6fgtIPenYBLlYyQi 8npPn1RHBtah/rBZsE9wx+ixSFiZXVoxr3c+0eLM5qx/BZ1L86QMlxbXrzlzw+5VLo owZUj8Y7VbcTo2sOQlqSsHYt9BPLxuC7yd10mWvitREI5onnc8psViY7rBblB38Pad i7l8YW5XaRDUA== From: Leon Romanovsky To: Marek Szyprowski , Robin Murphy , "Michael S. 
Tsirkin" , Petr Tesarik , Jonathan Corbet , Shuah Khan , Jason Wang , Xuan Zhuo , =?utf-8?q?Eugenio_P=C3=A9rez?= , Jason Gunthorpe , Leon Romanovsky , Steven Rostedt , Masami Hiramatsu , Mathieu Desnoyers , Joerg Roedel , Will Deacon , Andrew Morton Cc: iommu@lists.linux.dev, linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, virtualization@lists.linux.dev, linux-rdma@vger.kernel.org, linux-trace-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH v3 6/8] iommu/dma: add support for DMA_ATTR_REQUIRE_COHERENT attribute Date: Mon, 16 Mar 2026 21:06:50 +0200 Message-ID: <20260316-dma-debug-overlap-v3-6-1dde90a7f08b@nvidia.com> X-Mailer: git-send-email 2.53.0 In-Reply-To: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com> References: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" X-Mailer: b4 0.15-dev-18f8f Content-Transfer-Encoding: quoted-printable From: Leon Romanovsky Add support for the DMA_ATTR_REQUIRE_COHERENT attribute to the exported functions. This attribute indicates that the SWIOTLB path must not be used and that no sync operations should be performed. 
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 21 +++++++++++++++++----
 1 file changed, 17 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 5dac64be61bb2..94d5141696424 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1211,7 +1211,7 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 	 */
 	if (dev_use_swiotlb(dev, size, dir) &&
 	    iova_unaligned(iovad, phys, size)) {
-		if (attrs & DMA_ATTR_MMIO)
+		if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
 			return DMA_MAPPING_ERROR;
 
 		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
@@ -1223,7 +1223,8 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 		arch_sync_dma_for_device(phys, size, dir);
 
 	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && !(attrs & DMA_ATTR_MMIO))
+	if (iova == DMA_MAPPING_ERROR &&
+	    !(attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 	return iova;
 }
@@ -1233,7 +1234,7 @@ void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 {
 	phys_addr_t phys;
 
-	if (attrs & DMA_ATTR_MMIO) {
+	if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT)) {
 		__iommu_dma_unmap(dev, dma_handle, size);
 		return;
 	}
@@ -1945,9 +1946,21 @@ int dma_iova_link(struct device *dev, struct dma_iova_state *state,
 	if (WARN_ON_ONCE(iova_start_pad && offset > 0))
 		return -EIO;
 
+	/*
+	 * DMA_IOVA_USE_SWIOTLB is set on the state after some entry
+	 * took the SWIOTLB path, which we were supposed to prevent
+	 * for the DMA_ATTR_REQUIRE_COHERENT attribute.
+	 */
+	if (WARN_ON_ONCE((state->__size & DMA_IOVA_USE_SWIOTLB) &&
+			 (attrs & DMA_ATTR_REQUIRE_COHERENT)))
+		return -EOPNOTSUPP;
+
+	if (!dev_is_dma_coherent(dev) && (attrs & DMA_ATTR_REQUIRE_COHERENT))
+		return -EOPNOTSUPP;
+
 	if (dev_use_swiotlb(dev, size, dir) &&
 	    iova_unaligned(iovad, phys, size)) {
-		if (attrs & DMA_ATTR_MMIO)
+		if (attrs & (DMA_ATTR_MMIO | DMA_ATTR_REQUIRE_COHERENT))
 			return -EPERM;
 
 		return iommu_dma_iova_link_swiotlb(dev, state, phys, offset,
-- 
2.53.0

From nobody Tue Apr 7 02:34:19 2026
From: Leon Romanovsky
Subject: [PATCH v3 7/8] RDMA/umem: Tell DMA mapping that UMEM requires coherency
Date: Mon, 16 Mar 2026 21:06:51 +0200
Message-ID: <20260316-dma-debug-overlap-v3-7-1dde90a7f08b@nvidia.com>
In-Reply-To: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>
References: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>

The RDMA subsystem exposes DMA regions through the verbs interface, which assumes a coherent system. Use the DMA_ATTR_REQUIRE_COHERENT attribute to ensure coherency and avoid taking the SWIOTLB path.
The RDMA verbs programming model resembles HMM and assumes concurrent DMA and CPU access to userspace memory. The hardware and programming model support "one-sided" operations initiated remotely without any local CPU involvement or notification. These include ATOMIC compare/swap, READ, and WRITE. A remote CPU can use these operations to traverse data structures, manipulate locks, and perform similar tasks without the host CPU's awareness. If SWIOTLB substitutes the memory or DMA is not cache coherent, these use cases break entirely.

In-kernel RDMA is fine with incoherent mappings because kernel users do not rely on one-sided operations in ways that would expose these issues.

A given region may also be exported multiple times, which can trigger warnings about cacheline overlaps. These warnings are suppressed when the new attribute is used.

infiniband rocep8s0f0: mlx5_ib_reg_user_mr:1592:(pid 5812): start 0x2b28c000, iova 0x2b28c000, length 0x1000, access_flags 0x1
infiniband rocep8s0f0: mlx5_ib_reg_user_mr:1592:(pid 5812): start 0x2b28c001, iova 0x2b28c001, length 0xfff, access_flags 0x1
------------[ cut here ]------------
DMA-API: mlx5_core 0000:08:00.0: cacheline tracking EEXIST, overlapping mappings aren't supported
WARNING: kernel/dma/debug.c:620 at add_dma_entry+0x1bb/0x280, CPU#6: ibv_rc_pingpong/5812
Modules linked in: veth xt_conntrack xt_MASQUERADE nf_conntrack_netlink nfnetlink iptable_nat nf_nat xt_addrtype br_netfilter rpcsec_gss_krb5 auth_rpcgss oid_registry overlay mlx5_fwctl zram zsmalloc mlx5_ib fuse rpcrdma rdma_ucm ib_uverbs ib_iser libiscsi scsi_transport_iscsi ib_umad rdma_cm ib_ipoib iw_cm ib_cm mlx5_core ib_core
CPU: 6 UID: 2733 PID: 5812 Comm: ibv_rc_pingpong Tainted: G W 6.19.0+ #129 PREEMPT
Tainted: [W]=WARN
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS rel-1.13.0-0-gf21b5a4aeb02-prebuilt.qemu.org 04/01/2014
RIP: 0010:add_dma_entry+0x1be/0x280
Code: 8b 7b 10 48 85 ff 0f 84 c3 00 00 00 48 8b 6f 50 48 85 ed 75 03 48 8b 2f e8 ff 8e 6a 00 48 89 c6 48 8d 3d 55 ef 2d 01 48 89 ea <67> 48 0f b9 3a 48 85 db 74 1a 48 c7 c7 b0 00 2b 82 e8 9c 25 fd ff
RSP: 0018:ff11000138717978 EFLAGS: 00010286
RAX: ffffffffa02d7831 RBX: ff1100010246de00 RCX: 0000000000000000
RDX: ff110001036fac30 RSI: ffffffffa02d7831 RDI: ffffffff82678650
RBP: ff110001036fac30 R08: ff11000110dcb4a0 R09: ff11000110dcb478
R10: 0000000000000000 R11: ffffffff824b30a8 R12: 0000000000000000
R13: 00000000ffffffef R14: 0000000000000202 R15: ff1100010246de00
FS:  00007f59b411c740(0000) GS:ff110008dcc99000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffe538f7000 CR3: 000000010e066005 CR4: 0000000000373eb0
Call Trace:
 debug_dma_map_sg+0x1b4/0x390
 __dma_map_sg_attrs+0x6d/0x1a0
 dma_map_sgtable+0x19/0x30
 ib_umem_get+0x254/0x380 [ib_uverbs]
 mlx5_ib_reg_user_mr+0x68/0x2a0 [mlx5_ib]
 ib_uverbs_reg_mr+0x17f/0x2a0 [ib_uverbs]
 ib_uverbs_handler_UVERBS_METHOD_INVOKE_WRITE+0xc2/0x130 [ib_uverbs]
 ib_uverbs_cmd_verbs+0xa0b/0xae0 [ib_uverbs]
 ? ib_uverbs_handler_UVERBS_METHOD_QUERY_PORT_SPEED+0xe0/0xe0 [ib_uverbs]
 ? mmap_region+0x7a/0xb0
 ? do_mmap+0x3b8/0x5c0
 ib_uverbs_ioctl+0xa7/0x110 [ib_uverbs]
 __x64_sys_ioctl+0x14f/0x8b0
 ? ksys_mmap_pgoff+0xc5/0x190
 do_syscall_64+0x8c/0xbf0
 entry_SYSCALL_64_after_hwframe+0x4b/0x53
RIP: 0033:0x7f59b430aeed
Code: 04 25 28 00 00 00 48 89 45 c8 31 c0 48 8d 45 10 c7 45 b0 10 00 00 00 48 89 45 b8 48 8d 45 d0 48 89 45 c0 b8 10 00 00 00 0f 05 <89> c2 3d 00 f0 ff ff 77 1a 48 8b 45 c8 64 48 2b 04 25 28 00 00 00
RSP: 002b:00007ffe538f9430 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007ffe538f94c0 RCX: 00007f59b430aeed
RDX: 00007ffe538f94e0 RSI: 00000000c0181b01 RDI: 0000000000000003
RBP: 00007ffe538f9480 R08: 0000000000000028 R09: 00007ffe538f9684
R10: 0000000000000001 R11: 0000000000000246 R12: 00007ffe538f9684
R13: 000000000000000c R14: 000000002b28d170 R15: 000000000000000c
---[ end trace 0000000000000000 ]---

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/infiniband/core/umem.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/infiniband/core/umem.c b/drivers/infiniband/core/umem.c
index cff4fcca2c345..edc34c69f0f23 100644
--- a/drivers/infiniband/core/umem.c
+++ b/drivers/infiniband/core/umem.c
@@ -55,7 +55,8 @@ static void __ib_umem_release(struct ib_device *dev, struct ib_umem *umem, int d
 
 	if (dirty)
 		ib_dma_unmap_sgtable_attrs(dev, &umem->sgt_append.sgt,
-					   DMA_BIDIRECTIONAL, 0);
+					   DMA_BIDIRECTIONAL,
+					   DMA_ATTR_REQUIRE_COHERENT);
 
 	for_each_sgtable_sg(&umem->sgt_append.sgt, sg, i) {
 		unpin_user_page_range_dirty_lock(sg_page(sg),
@@ -169,7 +170,7 @@ struct ib_umem *ib_umem_get(struct ib_device *device, unsigned long addr,
 	unsigned long lock_limit;
 	unsigned long new_pinned;
 	unsigned long cur_base;
-	unsigned long dma_attr = 0;
+	unsigned long dma_attr = DMA_ATTR_REQUIRE_COHERENT;
 	struct mm_struct *mm;
 	unsigned long npages;
 	int pinned, ret;
-- 
2.53.0

From nobody Tue Apr 7 02:34:19 2026
From: Leon Romanovsky
Subject: [PATCH v3 8/8] mm/hmm: Indicate that HMM requires DMA coherency
Date: Mon, 16 Mar 2026 21:06:52 +0200
Message-ID: <20260316-dma-debug-overlap-v3-8-1dde90a7f08b@nvidia.com>
In-Reply-To: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>
References: <20260316-dma-debug-overlap-v3-0-1dde90a7f08b@nvidia.com>

HMM is fundamentally about allowing a sophisticated device to perform DMA directly to a process's memory while the CPU accesses that same memory at the same time. It is similar to SVA but does not rely on IOMMU support.

Because the entire model depends on concurrent access to shared memory, it fails as a uAPI if SWIOTLB substitutes the memory or if the CPU caches are not coherent with DMA. Until now, there has been no reliable way to report this, and various approximations have been used:

int hmm_dma_map_alloc(struct device *dev, struct hmm_dma_map *map,
		      size_t nr_entries, size_t dma_entry_size)
{
	<...>
	/*
	 * The HMM API violates our normal DMA buffer ownership rules and can't
	 * transfer buffer ownership.  The dma_addressing_limited() check is a
	 * best approximation to ensure no swiotlb buffering happens.
	 */
	dma_need_sync = !dev->dma_skip_sync;
	if (dma_need_sync || dma_addressing_limited(dev))
		return -EOPNOTSUPP;

So let's mark mapped buffers with the DMA_ATTR_REQUIRE_COHERENT attribute to prevent silent data corruption if someone tries to use HMM on a system with SWIOTLB or incoherent DMA.

Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 mm/hmm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index f6c4ddff4bd61..5955f2f0c83db 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -778,7 +778,7 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 	struct page *page = hmm_pfn_to_page(pfns[idx]);
 	phys_addr_t paddr = hmm_pfn_to_phys(pfns[idx]);
 	size_t offset = idx * map->dma_entry_size;
-	unsigned long attrs = 0;
+	unsigned long attrs = DMA_ATTR_REQUIRE_COHERENT;
 	dma_addr_t dma_addr;
 	int ret;
 
@@ -871,7 +871,7 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx)
 	struct dma_iova_state *state = &map->state;
 	dma_addr_t *dma_addrs = map->dma_list;
 	unsigned long *pfns = map->pfn_list;
-	unsigned long attrs = 0;
+	unsigned long attrs = DMA_ATTR_REQUIRE_COHERENT;
 
 	if ((pfns[idx] & valid_dma) != valid_dma)
 		return false;
-- 
2.53.0