From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH 1/8] dma-debug: refactor to use physical addresses for page mapping
Date: Wed, 25 Jun 2025 16:18:58 +0300

Convert the DMA debug infrastructure from page-based to physical
address-based mapping, as a preparation for relying on physical
addresses in the DMA mapping routines.

The refactoring renames debug_dma_map_page() to debug_dma_map_phys()
and changes its signature to accept a phys_addr_t parameter instead of
a struct page and offset. Similarly, debug_dma_unmap_page() becomes
debug_dma_unmap_phys(). A new dma_debug_phy entry type is introduced to
distinguish physical address mappings from the other debug entry types.

All callers throughout the codebase are updated to pass physical
addresses directly, which removes the page-to-physical conversion from
the debug layer and keeps it consistent with the DMA mapping API's move
toward physical addresses.

Signed-off-by: Leon Romanovsky
---
 Documentation/core-api/dma-api.rst |  4 ++--
 kernel/dma/debug.c                 | 28 +++++++++++++++++-----------
 kernel/dma/debug.h                 | 16 +++++++---------
 kernel/dma/mapping.c               | 15 ++++++++-------
 4 files changed, 34 insertions(+), 29 deletions(-)
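A sketch of how a call site changes once the debug hook takes a
physical address (illustrative only; example_debug_hook() is a
hypothetical helper, debug_dma_map_phys() is the function introduced
below):

    static void example_debug_hook(struct device *dev, struct page *page,
                                   size_t offset, size_t size,
                                   enum dma_data_direction dir,
                                   dma_addr_t addr)
    {
            phys_addr_t phys = page_to_phys(page) + offset;

            /* before: debug_dma_map_page(dev, page, offset, size, dir, addr, 0); */
            debug_dma_map_phys(dev, phys, size, dir, addr, 0);
    }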
diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 2ad08517e626..7491ee85ab25 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -816,7 +816,7 @@ example warning message may look like this::
 	[] find_busiest_group+0x207/0x8a0
 	[] _spin_lock_irqsave+0x1f/0x50
 	[] check_unmap+0x203/0x490
-	[] debug_dma_unmap_page+0x49/0x50
+	[] debug_dma_unmap_phys+0x49/0x50
 	[] nv_tx_done_optimized+0xc6/0x2c0
 	[] nv_nic_irq_optimized+0x73/0x2b0
 	[] handle_IRQ_event+0x34/0x70
@@ -910,7 +910,7 @@ that a driver may be leaking mappings.
 dma-debug interface debug_dma_mapping_error() to debug drivers that fail
 to check DMA mapping errors on addresses returned by dma_map_single() and
 dma_map_page() interfaces. This interface clears a flag set by
-debug_dma_map_page() to indicate that dma_mapping_error() has been called by
+debug_dma_map_phys() to indicate that dma_mapping_error() has been called by
 the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
 this flag is still set, prints warning message that includes call trace that
 leads up to the unmap. This interface can be called from dma_mapping_error()
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index e43c6de2bce4..517dc58329e0 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -39,6 +39,7 @@ enum {
 	dma_debug_sg,
 	dma_debug_coherent,
 	dma_debug_resource,
+	dma_debug_phy,
 };
 
 enum map_err_types {
@@ -141,6 +142,7 @@ static const char *type2name[] = {
 	[dma_debug_sg] = "scatter-gather",
 	[dma_debug_coherent] = "coherent",
 	[dma_debug_resource] = "resource",
+	[dma_debug_phy] = "phy",
 };
 
 static const char *dir2name[] = {
@@ -1201,9 +1203,8 @@ void debug_dma_map_single(struct device *dev, const void *addr,
 }
 EXPORT_SYMBOL(debug_dma_map_single);
 
-void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
-			size_t size, int direction, dma_addr_t dma_addr,
-			unsigned long attrs)
+void debug_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+			int direction, dma_addr_t dma_addr, unsigned long attrs)
 {
 	struct dma_debug_entry *entry;
 
@@ -1218,19 +1219,24 @@ void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
 		return;
 
 	entry->dev       = dev;
-	entry->type      = dma_debug_single;
-	entry->paddr     = page_to_phys(page) + offset;
+	entry->type      = dma_debug_phy;
+	entry->paddr     = phys;
 	entry->dev_addr  = dma_addr;
 	entry->size      = size;
 	entry->direction = direction;
 	entry->map_err_type = MAP_ERR_NOT_CHECKED;
 
-	check_for_stack(dev, page, offset);
+	if (pfn_valid(PHYS_PFN(phys))) {
+		struct page *page = phys_to_page(phys);
+		size_t offset = offset_in_page(phys);
 
-	if (!PageHighMem(page)) {
-		void *addr = page_address(page) + offset;
+		check_for_stack(dev, page, offset);
 
-		check_for_illegal_area(dev, addr, size);
+		if (!PageHighMem(page)) {
+			void *addr = page_address(page) + offset;
+
+			check_for_illegal_area(dev, addr, size);
+		}
 	}
 
 	add_dma_entry(entry, attrs);
@@ -1274,11 +1280,11 @@ void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 }
 EXPORT_SYMBOL(debug_dma_mapping_error);
 
-void debug_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
+void debug_dma_unmap_phys(struct device *dev, dma_addr_t dma_addr,
 			  size_t size, int direction)
 {
 	struct dma_debug_entry ref = {
-		.type           = dma_debug_single,
+		.type           = dma_debug_phy,
 		.dev            = dev,
 		.dev_addr       = dma_addr,
 		.size           = size,
diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
index f525197d3cae..76adb42bffd5 100644
--- a/kernel/dma/debug.h
+++ b/kernel/dma/debug.h
@@ -9,12 +9,11 @@
 #define _KERNEL_DMA_DEBUG_H
 
 #ifdef CONFIG_DMA_API_DEBUG
-extern void debug_dma_map_page(struct device *dev, struct page *page,
-			       size_t offset, size_t size,
-			       int direction, dma_addr_t dma_addr,
+extern void debug_dma_map_phys(struct device *dev, phys_addr_t phys,
+			       size_t size, int direction, dma_addr_t dma_addr,
 			       unsigned long attrs);
 
-extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
+extern void debug_dma_unmap_phys(struct device *dev, dma_addr_t addr,
 				 size_t size, int direction);
 
 extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
@@ -55,14 +54,13 @@ extern void debug_dma_sync_sg_for_device(struct device *dev,
 					 struct scatterlist *sg,
 					 int nelems, int direction);
 #else /* CONFIG_DMA_API_DEBUG */
-static inline void debug_dma_map_page(struct device *dev, struct page *page,
-				      size_t offset, size_t size,
-				      int direction, dma_addr_t dma_addr,
-				      unsigned long attrs)
+static inline void debug_dma_map_phys(struct device *dev, phys_addr_t phys,
+				      size_t size, int direction,
+				      dma_addr_t dma_addr, unsigned long attrs)
 {
 }
 
-static inline void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
+static inline void debug_dma_unmap_phys(struct device *dev, dma_addr_t addr,
 					size_t size, int direction)
 {
 }
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 107e4a4d251d..4c1dfbabb8ae 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -157,6 +157,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
+	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t addr;
 
 	BUG_ON(!valid_dma_direction(dir));
@@ -165,16 +166,15 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_map_page_direct(dev, page_to_phys(page) + offset + size))
+	    arch_dma_map_page_direct(dev, phys + size))
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_page(dev, page, offset, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
-	trace_dma_map_page(dev, page_to_phys(page) + offset, addr, size, dir,
-			   attrs);
-	debug_dma_map_page(dev, page, offset, size, dir, addr, attrs);
+	trace_dma_map_page(dev, phys, addr, size, dir, attrs);
+	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
 
 	return addr;
 }
@@ -194,7 +194,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_page(dev, addr, size, dir, attrs);
-	debug_dma_unmap_page(dev, addr, size, dir);
+	debug_dma_unmap_phys(dev, addr, size, dir);
 }
 EXPORT_SYMBOL(dma_unmap_page_attrs);
 
@@ -712,7 +712,8 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
 	if (page) {
 		trace_dma_alloc_pages(dev, page_to_virt(page), *dma_handle,
 				      size, dir, gfp, 0);
-		debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0);
+		debug_dma_map_phys(dev, page_to_phys(page), size, dir,
+				   *dma_handle, 0);
 	} else {
 		trace_dma_alloc_pages(dev, NULL, 0, size, dir, gfp, 0);
 	}
@@ -738,7 +739,7 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
 		dma_addr_t dma_handle, enum dma_data_direction dir)
 {
 	trace_dma_free_pages(dev, page_to_virt(page), dma_handle, size, dir, 0);
-	debug_dma_unmap_page(dev, dma_handle, size, dir);
+	debug_dma_unmap_phys(dev, dma_handle, size, dir);
 	__dma_free_pages(dev, size, page, dma_handle, dir);
 }
 EXPORT_SYMBOL_GPL(dma_free_pages);
-- 
2.49.0
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH 2/8] dma-mapping: rename trace_dma_*map_page to trace_dma_*map_phys
Date: Wed, 25 Jun 2025 16:18:59 +0300

As a preparation for the upcoming map_page -> map_phys API conversion,
rename trace_dma_*map_page() to trace_dma_*map_phys().
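A sketch of what changes at a call site: only the event name, since the
argument list of the underlying dma_map/dma_unmap event classes stays
the same (illustrative only):

    trace_dma_map_phys(dev, phys, addr, size, dir, attrs);  /* was trace_dma_map_page() */
    trace_dma_unmap_phys(dev, addr, size, dir, attrs);      /* was trace_dma_unmap_page() */

Tooling that enables the old tracepoints by name has to follow the
rename.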
Signed-off-by: Leon Romanovsky
---
 include/trace/events/dma.h | 4 ++--
 kernel/dma/mapping.c       | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/trace/events/dma.h b/include/trace/events/dma.h
index d8ddc27b6a7c..c77d478b6deb 100644
--- a/include/trace/events/dma.h
+++ b/include/trace/events/dma.h
@@ -71,7 +71,7 @@ DEFINE_EVENT(dma_map, name, \
 		 size_t size, enum dma_data_direction dir, unsigned long attrs), \
 	TP_ARGS(dev, phys_addr, dma_addr, size, dir, attrs))
 
-DEFINE_MAP_EVENT(dma_map_page);
+DEFINE_MAP_EVENT(dma_map_phys);
 DEFINE_MAP_EVENT(dma_map_resource);
 
 DECLARE_EVENT_CLASS(dma_unmap,
@@ -109,7 +109,7 @@ DEFINE_EVENT(dma_unmap, name, \
 		 enum dma_data_direction dir, unsigned long attrs), \
 	TP_ARGS(dev, addr, size, dir, attrs))
 
-DEFINE_UNMAP_EVENT(dma_unmap_page);
+DEFINE_UNMAP_EVENT(dma_unmap_phys);
 DEFINE_UNMAP_EVENT(dma_unmap_resource);
 
 DECLARE_EVENT_CLASS(dma_alloc_class,
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 4c1dfbabb8ae..fe1f0da6dc50 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -173,7 +173,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
-	trace_dma_map_page(dev, phys, addr, size, dir, attrs);
+	trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
 	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
 
 	return addr;
@@ -193,7 +193,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 		iommu_dma_unmap_page(dev, addr, size, dir, attrs);
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
-	trace_dma_unmap_page(dev, addr, size, dir, attrs);
+	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
 	debug_dma_unmap_phys(dev, addr, size, dir);
 }
 EXPORT_SYMBOL(dma_unmap_page_attrs);
-- 
2.49.0
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH 3/8] iommu/dma: rename iommu_dma_*map_page to iommu_dma_*map_phys
Date: Wed, 25 Jun 2025 16:19:00 +0300

Rename the IOMMU DMA mapping functions to better reflect their actual
calling convention. The functions iommu_dma_map_page() and
iommu_dma_unmap_page() are renamed to iommu_dma_map_phys() and
iommu_dma_unmap_phys() respectively, as they already operate on
physical addresses rather than page structures.

The calling convention changes from accepting (struct page *page,
unsigned long offset) to (phys_addr_t phys), which eliminates the need
for page-to-physical address conversion within the functions. This
renaming prepares for the broader DMA API conversion from page-based
to physical address-based mapping throughout the kernel.

All callers are updated to pass physical addresses directly, including
dma_map_page_attrs(), the scatterlist mapping functions, and the DMA
page allocation helpers. The change simplifies the code by removing
the page_to_phys() + offset calculation that was previously done
inside the IOMMU functions.
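A sketch of the resulting scatterlist usage, assuming the renamed
helper introduced below; example_map_sg_entry() is a hypothetical
wrapper:

    static dma_addr_t example_map_sg_entry(struct device *dev,
                                           struct scatterlist *s,
                                           enum dma_data_direction dir,
                                           unsigned long attrs)
    {
            /* sg_phys() already folds in s->offset, so no separate offset */
            return iommu_dma_map_phys(dev, sg_phys(s), s->length, dir, attrs);
    }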
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 14 ++++++--------
 include/linux/iommu-dma.h |  7 +++----
 kernel/dma/mapping.c      |  4 ++--
 kernel/dma/ops_helpers.c  |  6 +++---
 4 files changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index ea2ef53bd4fe..cd4bc22efa96 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1190,11 +1190,9 @@ static inline size_t iova_unaligned(struct iova_domain *iovad, phys_addr_t phys,
 	return iova_offset(iovad, phys | size);
 }
 
-dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
 {
-	phys_addr_t phys = page_to_phys(page) + offset;
 	bool coherent = dev_is_dma_coherent(dev);
 	int prot = dma_info_to_prot(dir, coherent, attrs);
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -1222,7 +1220,7 @@ dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 	return iova;
 }
 
-void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
@@ -1341,7 +1339,7 @@ static void iommu_dma_unmap_sg_swiotlb(struct device *dev, struct scatterlist *s
 	int i;
 
 	for_each_sg(sg, s, nents, i)
-		iommu_dma_unmap_page(dev, sg_dma_address(s),
+		iommu_dma_unmap_phys(dev, sg_dma_address(s),
 				sg_dma_len(s), dir, attrs);
 }
 
@@ -1354,8 +1352,8 @@ static int iommu_dma_map_sg_swiotlb(struct device *dev, struct scatterlist *sg,
 	sg_dma_mark_swiotlb(sg);
 
 	for_each_sg(sg, s, nents, i) {
-		sg_dma_address(s) = iommu_dma_map_page(dev, sg_page(s),
-				s->offset, s->length, dir, attrs);
+		sg_dma_address(s) = iommu_dma_map_phys(dev, sg_phys(s),
+				s->length, dir, attrs);
 		if (sg_dma_address(s) == DMA_MAPPING_ERROR)
 			goto out_unmap;
 		sg_dma_len(s) = s->length;
diff --git a/include/linux/iommu-dma.h b/include/linux/iommu-dma.h
index 508beaa44c39..485bdffed988 100644
--- a/include/linux/iommu-dma.h
+++ b/include/linux/iommu-dma.h
@@ -21,10 +21,9 @@ static inline bool use_dma_iommu(struct device *dev)
 }
 #endif /* CONFIG_IOMMU_DMA */
 
-dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
-		unsigned long offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs);
-void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
+dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs);
+void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs);
 int iommu_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
 		enum dma_data_direction dir, unsigned long attrs);
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index fe1f0da6dc50..58482536db9b 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -169,7 +169,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 	    arch_dma_map_page_direct(dev, phys + size))
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else if (use_dma_iommu(dev))
-		addr = iommu_dma_map_page(dev, page, offset, size, dir, attrs);
+		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
@@ -190,7 +190,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	    arch_dma_unmap_page_direct(dev, addr + size))
 		dma_direct_unmap_page(dev, addr, size, dir, attrs);
 	else if (use_dma_iommu(dev))
-		iommu_dma_unmap_page(dev, addr, size, dir, attrs);
+		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
diff --git a/kernel/dma/ops_helpers.c b/kernel/dma/ops_helpers.c
index 9afd569eadb9..6f9d604d9d40 100644
--- a/kernel/dma/ops_helpers.c
+++ b/kernel/dma/ops_helpers.c
@@ -72,8 +72,8 @@ struct page *dma_common_alloc_pages(struct device *dev, size_t size,
 		return NULL;
 
 	if (use_dma_iommu(dev))
-		*dma_handle = iommu_dma_map_page(dev, page, 0, size, dir,
-						 DMA_ATTR_SKIP_CPU_SYNC);
+		*dma_handle = iommu_dma_map_phys(dev, page_to_phys(page), size,
+						 dir, DMA_ATTR_SKIP_CPU_SYNC);
 	else
 		*dma_handle = ops->map_page(dev, page, 0, size, dir,
 					    DMA_ATTR_SKIP_CPU_SYNC);
@@ -92,7 +92,7 @@ void dma_common_free_pages(struct device *dev, size_t size, struct page *page,
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 
 	if (use_dma_iommu(dev))
-		iommu_dma_unmap_page(dev, dma_handle, size, dir,
+		iommu_dma_unmap_phys(dev, dma_handle, size, dir,
 				     DMA_ATTR_SKIP_CPU_SYNC);
 	else if (ops->unmap_page)
 		ops->unmap_page(dev, dma_handle, size, dir,
-- 
2.49.0
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH 4/8] dma-mapping: convert dma_direct_*map_page to be phys_addr_t based
Date: Wed, 25 Jun 2025 16:19:01 +0300

Convert the DMA direct mapping functions to accept physical addresses
directly instead of page+offset parameters. The functions were already
operating on physical addresses internally, so this change eliminates
the redundant page-to-physical conversion at the API boundary.

The functions dma_direct_map_page() and dma_direct_unmap_page() are
renamed to dma_direct_map_phys() and dma_direct_unmap_phys()
respectively, with their calling convention changed from (struct page
*page, unsigned long offset) to (phys_addr_t phys).

Architecture-specific functions arch_dma_map_page_direct() and
arch_dma_unmap_page_direct() are similarly renamed to
arch_dma_map_phys_direct() and arch_dma_unmap_phys_direct().

The is_pci_p2pdma_page() checks are replaced with pfn_valid() checks
on PHYS_PFN(phys). This provides more accurate validation for memory
regions that are not backed by a struct page, without the need for a
"faked" struct page.
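A minimal sketch of the new guard in the bounce path, under the
assumption stated above that memory without a struct page can never be
bounced through swiotlb:

    if (is_swiotlb_force_bounce(dev)) {
            if (!pfn_valid(PHYS_PFN(phys)))
                    return DMA_MAPPING_ERROR;       /* no struct page to bounce */
            return swiotlb_map(dev, phys, size, dir, attrs);
    }

Note that pfn_valid() is a broader test than is_pci_p2pdma_page(): it
rejects any address that lacks a struct page, not just P2PDMA pages.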
Signed-off-by: Leon Romanovsky
---
 arch/powerpc/kernel/dma-iommu.c |  4 ++--
 include/linux/dma-map-ops.h     |  8 ++++----
 kernel/dma/direct.c             |  6 +++---
 kernel/dma/direct.h             | 13 ++++++-------
 kernel/dma/mapping.c            |  8 ++++----
 5 files changed, 19 insertions(+), 20 deletions(-)

diff --git a/arch/powerpc/kernel/dma-iommu.c b/arch/powerpc/kernel/dma-iommu.c
index 4d64a5db50f3..0359ab72cd3b 100644
--- a/arch/powerpc/kernel/dma-iommu.c
+++ b/arch/powerpc/kernel/dma-iommu.c
@@ -14,7 +14,7 @@
 #define can_map_direct(dev, addr) \
 	((dev)->bus_dma_limit >= phys_to_dma((dev), (addr)))
 
-bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr)
+bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr)
 {
 	if (likely(!dev->bus_dma_limit))
 		return false;
@@ -24,7 +24,7 @@ bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr)
 
 #define is_direct_handle(dev, h) ((h) >= (dev)->archdata.dma_offset)
 
-bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle)
+bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle)
 {
 	if (likely(!dev->bus_dma_limit))
 		return false;
diff --git a/include/linux/dma-map-ops.h b/include/linux/dma-map-ops.h
index f48e5fb88bd5..71f5b3025415 100644
--- a/include/linux/dma-map-ops.h
+++ b/include/linux/dma-map-ops.h
@@ -392,15 +392,15 @@ void *arch_dma_set_uncached(void *addr, size_t size);
 void arch_dma_clear_uncached(void *addr, size_t size);
 
 #ifdef CONFIG_ARCH_HAS_DMA_MAP_DIRECT
-bool arch_dma_map_page_direct(struct device *dev, phys_addr_t addr);
-bool arch_dma_unmap_page_direct(struct device *dev, dma_addr_t dma_handle);
+bool arch_dma_map_phys_direct(struct device *dev, phys_addr_t addr);
+bool arch_dma_unmap_phys_direct(struct device *dev, dma_addr_t dma_handle);
 bool arch_dma_map_sg_direct(struct device *dev, struct scatterlist *sg,
 		int nents);
 bool arch_dma_unmap_sg_direct(struct device *dev, struct scatterlist *sg,
 		int nents);
 #else
-#define arch_dma_map_page_direct(d, a)		(false)
-#define arch_dma_unmap_page_direct(d, a)	(false)
+#define arch_dma_map_phys_direct(d, a)		(false)
+#define arch_dma_unmap_phys_direct(d, a)	(false)
 #define arch_dma_map_sg_direct(d, s, n)		(false)
 #define arch_dma_unmap_sg_direct(d, s, n)	(false)
 #endif
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 24c359d9c879..fa75e3070073 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -453,7 +453,7 @@ void dma_direct_unmap_sg(struct device *dev, struct scatterlist *sgl,
 		if (sg_dma_is_bus_address(sg))
 			sg_dma_unmark_bus_address(sg);
 		else
-			dma_direct_unmap_page(dev, sg->dma_address,
+			dma_direct_unmap_phys(dev, sg->dma_address,
 					      sg_dma_len(sg), dir, attrs);
 	}
 }
@@ -476,8 +476,8 @@ int dma_direct_map_sg(struct device *dev, struct scatterlist *sgl, int nents,
 			 */
 			break;
 		case PCI_P2PDMA_MAP_NONE:
-			sg->dma_address = dma_direct_map_page(dev, sg_page(sg),
-					sg->offset, sg->length, dir, attrs);
+			sg->dma_address = dma_direct_map_phys(dev, sg_phys(sg),
+					sg->length, dir, attrs);
 			if (sg->dma_address == DMA_MAPPING_ERROR) {
 				ret = -EIO;
 				goto out_unmap;
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index d2c0b7e632fc..10c1ba73c482 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -80,22 +80,21 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_dma_mark_clean(paddr, size);
 }
 
-static inline dma_addr_t dma_direct_map_page(struct device *dev,
-		struct page *page, unsigned long offset, size_t size,
-		enum dma_data_direction dir, unsigned long attrs)
+static inline dma_addr_t dma_direct_map_phys(struct device *dev,
+		phys_addr_t phys, size_t size, enum dma_data_direction dir,
+		unsigned long attrs)
 {
-	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
 	if (is_swiotlb_force_bounce(dev)) {
-		if (is_pci_p2pdma_page(page))
+		if (!pfn_valid(PHYS_PFN(phys)))
 			return DMA_MAPPING_ERROR;
 		return swiotlb_map(dev, phys, size, dir, attrs);
 	}
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true)) ||
 	    dma_kmalloc_needs_bounce(dev, size, dir)) {
-		if (is_pci_p2pdma_page(page))
+		if (!pfn_valid(PHYS_PFN(phys)))
 			return DMA_MAPPING_ERROR;
 		if (is_swiotlb_active(dev))
 			return swiotlb_map(dev, phys, size, dir, attrs);
@@ -111,7 +110,7 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	return dma_addr;
 }
 
-static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
+static inline void dma_direct_unmap_phys(struct device *dev, dma_addr_t addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	phys_addr_t phys = dma_to_phys(dev, addr);
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 58482536db9b..80481a873340 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -166,8 +166,8 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_map_page_direct(dev, phys + size))
-		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
+	    arch_dma_map_phys_direct(dev, phys + size))
+		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else
@@ -187,8 +187,8 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 
 	BUG_ON(!valid_dma_direction(dir));
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_unmap_page_direct(dev, addr + size))
-		dma_direct_unmap_page(dev, addr, size, dir, attrs);
+	    arch_dma_unmap_phys_direct(dev, addr + size))
+		dma_direct_unmap_phys(dev, addr, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		iommu_dma_unmap_phys(dev, addr, size, dir, attrs);
 	else
-- 
2.49.0
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH 5/8] kmsan: convert kmsan_handle_dma to use physical addresses
Date: Wed, 25 Jun 2025 16:19:02 +0300

Convert the KMSAN DMA handling function from a page-based to a
physical address-based interface.

The refactoring changes the kmsan_handle_dma() parameters from (struct
page *page, size_t offset, size_t size) to (phys_addr_t phys, size_t
size). A pfn_valid() check is added so that KMSAN ignores addresses
that are not backed by a struct page. As part of this change, highmem
addresses are now supported by mapping each page with
kmap_local_page(), so both lowmem and highmem regions are handled
properly.

All callers throughout the codebase are updated to use the new
phys_addr_t based interface.

Signed-off-by: Leon Romanovsky
Acked-by: Alexander Potapenko
---
 drivers/virtio/virtio_ring.c |  4 ++--
 include/linux/kmsan.h        | 12 +++++++-----
 kernel/dma/mapping.c         |  2 +-
 mm/kmsan/hooks.c             | 36 +++++++++++++++++++++++++++++-------
 tools/virtio/linux/kmsan.h   |  2 +-
 5 files changed, 40 insertions(+), 16 deletions(-)
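A sketch of the converted call sites (both appear in the diff below):
the physical address comes straight from the scatterlist or from
virt_to_phys(), with no page/offset pair:

    kmsan_handle_dma(sg_phys(sg), sg->length, direction);
    kmsan_handle_dma(virt_to_phys(ptr), size, dir);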
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index b784aab66867..dab49385e3e8 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -378,7 +378,7 @@ static int vring_map_one_sg(const struct vring_virtqueue *vq, struct scatterlist
 		 * is initialized by the hardware. Explicitly check/unpoison it
 		 * depending on the direction.
 		 */
-		kmsan_handle_dma(sg_page(sg), sg->offset, sg->length, direction);
+		kmsan_handle_dma(sg_phys(sg), sg->length, direction);
 		*addr = (dma_addr_t)sg_phys(sg);
 		return 0;
 	}
@@ -3149,7 +3149,7 @@ dma_addr_t virtqueue_dma_map_single_attrs(struct virtqueue *_vq, void *ptr,
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
 	if (!vq->use_dma_api) {
-		kmsan_handle_dma(virt_to_page(ptr), offset_in_page(ptr), size, dir);
+		kmsan_handle_dma(virt_to_phys(ptr), size, dir);
 		return (dma_addr_t)virt_to_phys(ptr);
 	}
 
diff --git a/include/linux/kmsan.h b/include/linux/kmsan.h
index 2b1432cc16d5..6f27b9824ef7 100644
--- a/include/linux/kmsan.h
+++ b/include/linux/kmsan.h
@@ -182,8 +182,7 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end);
 
 /**
  * kmsan_handle_dma() - Handle a DMA data transfer.
- * @page:   first page of the buffer.
- * @offset: offset of the buffer within the first page.
+ * @phys:   physical address of the buffer.
  * @size:   buffer size.
  * @dir:    one of possible dma_data_direction values.
  *
@@ -191,8 +190,11 @@ void kmsan_iounmap_page_range(unsigned long start, unsigned long end);
  *  * checks the buffer, if it is copied to device;
  *  * initializes the buffer, if it is copied from device;
  *  * does both, if this is a DMA_BIDIRECTIONAL transfer.
+ *
+ * The function handles page lookup internally and supports both lowmem
+ * and highmem addresses.
  */
-void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+void kmsan_handle_dma(phys_addr_t phys, size_t size,
 		      enum dma_data_direction dir);
 
 /**
@@ -372,8 +374,8 @@ static inline void kmsan_iounmap_page_range(unsigned long start,
 {
 }
 
-static inline void kmsan_handle_dma(struct page *page, size_t offset,
-				    size_t size, enum dma_data_direction dir)
+static inline void kmsan_handle_dma(phys_addr_t phys, size_t size,
+				    enum dma_data_direction dir)
 {
 }
 
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 80481a873340..709405d46b2b 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -172,7 +172,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
-	kmsan_handle_dma(page, offset, size, dir);
+	kmsan_handle_dma(phys, size, dir);
 	trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
 	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
 
diff --git a/mm/kmsan/hooks.c b/mm/kmsan/hooks.c
index 97de3d6194f0..eab7912a3bf0 100644
--- a/mm/kmsan/hooks.c
+++ b/mm/kmsan/hooks.c
@@ -336,25 +336,48 @@ static void kmsan_handle_dma_page(const void *addr, size_t size,
 }
 
 /* Helper function to handle DMA data transfers. */
-void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+void kmsan_handle_dma(phys_addr_t phys, size_t size,
 		      enum dma_data_direction dir)
 {
 	u64 page_offset, to_go, addr;
+	struct page *page;
+	void *kaddr;
 
-	if (PageHighMem(page))
+	if (!pfn_valid(PHYS_PFN(phys)))
 		return;
-	addr = (u64)page_address(page) + offset;
+
+	page = phys_to_page(phys);
+	page_offset = offset_in_page(phys);
+
 	/*
 	 * The kernel may occasionally give us adjacent DMA pages not belonging
 	 * to the same allocation. Process them separately to avoid triggering
 	 * internal KMSAN checks.
 	 */
 	while (size > 0) {
-		page_offset = offset_in_page(addr);
 		to_go = min(PAGE_SIZE - page_offset, (u64)size);
+
+		if (PageHighMem(page))
+			/* Handle highmem pages using kmap */
+			kaddr = kmap_local_page(page);
+		else
+			/* Lowmem pages can be accessed directly */
+			kaddr = page_address(page);
+
+		addr = (u64)kaddr + page_offset;
 		kmsan_handle_dma_page((void *)addr, to_go, dir);
-		addr += to_go;
+
+		if (PageHighMem(page))
+			kunmap_local(kaddr);
+
+		phys += to_go;
 		size -= to_go;
+
+		/* Move to next page if needed */
+		if (size > 0) {
+			page = phys_to_page(phys);
+			page_offset = offset_in_page(phys);
+		}
 	}
 }
 EXPORT_SYMBOL_GPL(kmsan_handle_dma);
@@ -366,8 +389,7 @@ void kmsan_handle_dma_sg(struct scatterlist *sg, int nents,
 	int i;
 
 	for_each_sg(sg, item, nents, i)
-		kmsan_handle_dma(sg_page(item), item->offset, item->length,
-				 dir);
+		kmsan_handle_dma(sg_phys(item), item->length, dir);
 }
 
 /* Functions from kmsan-checks.h follow. */
diff --git a/tools/virtio/linux/kmsan.h b/tools/virtio/linux/kmsan.h
index 272b5aa285d5..6cd2e3efd03d 100644
--- a/tools/virtio/linux/kmsan.h
+++ b/tools/virtio/linux/kmsan.h
@@ -4,7 +4,7 @@
 
 #include
 
-inline void kmsan_handle_dma(struct page *page, size_t offset, size_t size,
+inline void kmsan_handle_dma(phys_addr_t phys, size_t size,
 			     enum dma_data_direction dir)
 {
 }
-- 
2.49.0
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH 6/8] dma-mapping: fail early if physical address is mapped through platform callback
Date: Wed, 25 Jun 2025 16:19:03 +0300

None of the platforms that implement the .map_page() callback support
physical addresses without a real struct page behind them. Add a check
to fail such mappings early.

Signed-off-by: Leon Romanovsky
---
 kernel/dma/mapping.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)
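Condensed, the added check looks like this (the pfn_valid() test is
only active with CONFIG_DMA_API_DEBUG, matching the diff below):

    if (IS_ENABLED(CONFIG_DMA_API_DEBUG) &&
        unlikely(!pfn_valid(PHYS_PFN(phys))))
            return DMA_MAPPING_ERROR;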
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 709405d46b2b..74efb6909103 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -158,6 +158,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
 	phys_addr_t phys = page_to_phys(page) + offset;
+	bool is_pfn_valid = true;
 	dma_addr_t addr;
 
 	BUG_ON(!valid_dma_direction(dir));
@@ -170,8 +171,20 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		addr = dma_direct_map_phys(dev, phys, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_phys(dev, phys, size, dir, attrs);
-	else
+	else {
+		if (IS_ENABLED(CONFIG_DMA_API_DEBUG))
+			is_pfn_valid = pfn_valid(PHYS_PFN(phys));
+
+		if (unlikely(!is_pfn_valid))
+			return DMA_MAPPING_ERROR;
+
+		/*
+		 * All platforms which implement .map_page() don't support
+		 * non-struct page backed addresses.
+		 */
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
+	}
+
 	kmsan_handle_dma(phys, size, dir);
 	trace_dma_map_phys(dev, phys, addr, size, dir, attrs);
 	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
-- 
2.49.0
Tsirkin" , Jason Wang , Xuan Zhuo , =?UTF-8?q?Eugenio=20P=C3=A9rez?= , Alexander Potapenko , Marco Elver , Dmitry Vyukov , Masami Hiramatsu , Mathieu Desnoyers , =?UTF-8?q?J=C3=A9r=C3=B4me=20Glisse?= , Andrew Morton , linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, iommu@lists.linux.dev, virtualization@lists.linux.dev, kasan-dev@googlegroups.com, linux-trace-kernel@vger.kernel.org, linux-mm@kvack.org Subject: [PATCH 7/8] dma-mapping: export new dma_*map_phys() interface Date: Wed, 25 Jun 2025 16:19:04 +0300 Message-ID: <7013881bb86a37e92ffaf93de6f53701943bf717.1750854543.git.leon@kernel.org> X-Mailer: git-send-email 2.49.0 In-Reply-To: References: Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" From: Leon Romanovsky Introduce new DMA mapping functions dma_map_phys() and dma_unmap_phys() that operate directly on physical addresses instead of page+offset parameters. This provides a more efficient interface for drivers that already have physical addresses available. The new functions are implemented as the primary mapping layer, with the existing dma_map_page_attrs() and dma_unmap_page_attrs() functions converted to simple wrappers around the phys-based implementations. The old page-based API is preserved in mapping.c to ensure that existing code won't be affected by changing EXPORT_SYMBOL to EXPORT_SYMBOL_GPL variant for dma_*map_phys(). Signed-off-by: Leon Romanovsky --- include/linux/dma-mapping.h | 13 +++++++++++++ kernel/dma/mapping.c | 25 ++++++++++++++++++++----- 2 files changed, 33 insertions(+), 5 deletions(-) diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h index 55c03e5fe8cb..ba54bbeca861 100644 --- a/include/linux/dma-mapping.h +++ b/include/linux/dma-mapping.h @@ -118,6 +118,10 @@ dma_addr_t dma_map_page_attrs(struct device *dev, stru= ct page *page, unsigned long attrs); void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs); +dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size, + enum dma_data_direction dir, unsigned long attrs); +void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size, + enum dma_data_direction dir, unsigned long attrs); unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs); void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg, @@ -172,6 +176,15 @@ static inline void dma_unmap_page_attrs(struct device = *dev, dma_addr_t addr, size_t size, enum dma_data_direction dir, unsigned long attrs) { } +static inline dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ + return DMA_MAPPING_ERROR; +} +static inline void dma_unmap_phys(struct device *dev, dma_addr_t addr, + size_t size, enum dma_data_direction dir, unsigned long attrs) +{ +} static inline unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg, int nents, enum dma_data_direction dir, unsigned long attrs) diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c index 74efb6909103..29e8594a725a 100644 --- a/kernel/dma/mapping.c +++ b/kernel/dma/mapping.c @@ -152,12 +152,12 @@ static inline bool dma_map_direct(struct device *dev, return dma_go_direct(dev, *dev->dma_mask, ops); } =20 -dma_addr_t 
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 55c03e5fe8cb..ba54bbeca861 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -118,6 +118,10 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		unsigned long attrs);
 void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 		enum dma_data_direction dir, unsigned long attrs);
+dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs);
+void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
+		enum dma_data_direction dir, unsigned long attrs);
 unsigned int dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
 		int nents, enum dma_data_direction dir, unsigned long attrs);
 void dma_unmap_sg_attrs(struct device *dev, struct scatterlist *sg,
@@ -172,6 +176,15 @@ static inline void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 }
+static inline dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+	return DMA_MAPPING_ERROR;
+}
+static inline void dma_unmap_phys(struct device *dev, dma_addr_t addr,
+		size_t size, enum dma_data_direction dir, unsigned long attrs)
+{
+}
 static inline unsigned int dma_map_sg_attrs(struct device *dev,
 		struct scatterlist *sg, int nents, enum dma_data_direction dir,
 		unsigned long attrs)
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 74efb6909103..29e8594a725a 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -152,12 +152,12 @@ static inline bool dma_map_direct(struct device *dev,
 	return dma_go_direct(dev, *dev->dma_mask, ops);
 }
 
-dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
-		size_t offset, size_t size, enum dma_data_direction dir,
-		unsigned long attrs)
+dma_addr_t dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
-	phys_addr_t phys = page_to_phys(page) + offset;
+	struct page *page = phys_to_page(phys);
+	size_t offset = offset_in_page(phys);
 	bool is_pfn_valid = true;
 	dma_addr_t addr;
 
@@ -191,9 +191,17 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 
 	return addr;
 }
+EXPORT_SYMBOL_GPL(dma_map_phys);
+
+dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
+		size_t offset, size_t size, enum dma_data_direction dir,
+		unsigned long attrs)
+{
+	return dma_map_phys(dev, page_to_phys(page) + offset, size, dir, attrs);
+}
 EXPORT_SYMBOL(dma_map_page_attrs);
 
-void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
+void dma_unmap_phys(struct device *dev, dma_addr_t addr, size_t size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
@@ -209,6 +217,13 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	trace_dma_unmap_phys(dev, addr, size, dir, attrs);
 	debug_dma_unmap_phys(dev, addr, size, dir);
 }
+EXPORT_SYMBOL_GPL(dma_unmap_phys);
+
+void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
+		enum dma_data_direction dir, unsigned long attrs)
+{
+	dma_unmap_phys(dev, addr, size, dir, attrs);
+}
 EXPORT_SYMBOL(dma_unmap_page_attrs);
 
 static int __dma_map_sg_attrs(struct device *dev, struct scatterlist *sg,
-- 
2.49.0
From: Leon Romanovsky
To: Marek Szyprowski
Subject: [PATCH 8/8] mm/hmm: migrate to physical address-based DMA mapping API
Date: Wed, 25 Jun 2025 16:19:05 +0300

Convert HMM DMA operations from the legacy page-based API to the new
physical address-based dma_map_phys() and dma_unmap_phys() functions.
This demonstrates the preferred approach for new code, which should
use physical addresses directly rather than page+offset parameters.

The change replaces dma_map_page() and dma_unmap_page() calls with
dma_map_phys() and dma_unmap_phys() respectively, using the physical
address that was already available in the code. This eliminates the
redundant page-to-physical address conversion and aligns with the DMA
subsystem's move toward physical address-centric interfaces.

This serves as an example of how new code should be written to
leverage the more efficient physical address API, which provides
cleaner interfaces for drivers that already have access to physical
addresses.

Signed-off-by: Leon Romanovsky
---
 mm/hmm.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index feac86196a65..9354fae3ae06 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -779,8 +779,8 @@ dma_addr_t hmm_dma_map_pfn(struct device *dev, struct hmm_dma_map *map,
 	if (WARN_ON_ONCE(dma_need_unmap(dev) && !dma_addrs))
 		goto error;
 
-	dma_addr = dma_map_page(dev, page, 0, map->dma_entry_size,
-				DMA_BIDIRECTIONAL);
+	dma_addr = dma_map_phys(dev, paddr, map->dma_entry_size,
+				DMA_BIDIRECTIONAL, 0);
 	if (dma_mapping_error(dev, dma_addr))
 		goto error;
 
@@ -823,8 +823,8 @@ bool hmm_dma_unmap_pfn(struct device *dev, struct hmm_dma_map *map, size_t idx)
 		dma_iova_unlink(dev, state, idx * map->dma_entry_size,
 				map->dma_entry_size, DMA_BIDIRECTIONAL, attrs);
 	} else if (dma_need_unmap(dev))
-		dma_unmap_page(dev, dma_addrs[idx], map->dma_entry_size,
-			       DMA_BIDIRECTIONAL);
+		dma_unmap_phys(dev, dma_addrs[idx], map->dma_entry_size,
+			       DMA_BIDIRECTIONAL, 0);
 
 	pfns[idx] &= ~(HMM_PFN_DMA_MAPPED | HMM_PFN_P2PDMA |
 		       HMM_PFN_P2PDMA_BUS);
-- 
2.49.0