From: Leon Romanovsky
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, Abdiel Janulgue, Alexander Potapenko,
 Alex Gaynor, Andrew Morton, Christoph Hellwig, Danilo Krummrich,
 iommu@lists.linux.dev, Jason Wang, Jens Axboe, Joerg Roedel,
 Jonathan Corbet, Juergen Gross, kasan-dev@googlegroups.com, Keith Busch,
 linux-block@vger.kernel.org, linux-doc@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
 linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan, Masami Hiramatsu,
 Michael Ellerman, "Michael S. Tsirkin", Miguel Ojeda, Robin Murphy,
 rust-for-linux@vger.kernel.org, Sagi Grimberg, Stefano Stabellini,
 Steven Rostedt, virtualization@lists.linux.dev, Will Deacon,
 xen-devel@lists.xenproject.org
Subject: [PATCH v1 03/16] dma-debug: refactor to use physical addresses for page mapping
Date: Mon, 4 Aug 2025 15:42:37 +0300
Message-ID: <9ba84c387ce67389cd80f374408eebb58326c448.1754292567.git.leon@kernel.org>
X-Mailer: git-send-email 2.50.1

From: Leon Romanovsky

Convert the DMA debug infrastructure from page-based to physical
address-based mapping as a preparation to rely on physical addresses
in the DMA mapping routines.

The refactoring renames debug_dma_map_page() to debug_dma_map_phys() and
changes its signature to accept a phys_addr_t parameter instead of
struct page and offset. Similarly, debug_dma_unmap_page() becomes
debug_dma_unmap_phys(). A new dma_debug_phy type is introduced to
distinguish physical address mappings from other debug entry types. All
callers throughout the codebase are updated to pass physical addresses
directly.

This refactoring eliminates the need to convert between page pointers
and physical addresses in the debug layer, making the code more
efficient and consistent with the DMA mapping API's physical address
focus.

Signed-off-by: Leon Romanovsky
Reviewed-by: Jason Gunthorpe
---
 Documentation/core-api/dma-api.rst |  4 ++--
 kernel/dma/debug.c                 | 28 +++++++++++++++++-----------
 kernel/dma/debug.h                 | 16 +++++++---------
 kernel/dma/mapping.c               | 15 ++++++++-------
 4 files changed, 34 insertions(+), 29 deletions(-)

diff --git a/Documentation/core-api/dma-api.rst b/Documentation/core-api/dma-api.rst
index 3087bea715ed2..ca75b35416792 100644
--- a/Documentation/core-api/dma-api.rst
+++ b/Documentation/core-api/dma-api.rst
@@ -761,7 +761,7 @@ example warning message may look like this::
 	[] find_busiest_group+0x207/0x8a0
 	[] _spin_lock_irqsave+0x1f/0x50
 	[] check_unmap+0x203/0x490
-	[] debug_dma_unmap_page+0x49/0x50
+	[] debug_dma_unmap_phys+0x49/0x50
 	[] nv_tx_done_optimized+0xc6/0x2c0
 	[] nv_nic_irq_optimized+0x73/0x2b0
 	[] handle_IRQ_event+0x34/0x70
@@ -855,7 +855,7 @@ that a driver may be leaking mappings.
 dma-debug interface debug_dma_mapping_error() to debug drivers that fail
 to check DMA mapping errors on addresses returned by dma_map_single() and
 dma_map_page() interfaces. This interface clears a flag set by
-debug_dma_map_page() to indicate that dma_mapping_error() has been called by
+debug_dma_map_phys() to indicate that dma_mapping_error() has been called by
 the driver. When driver does unmap, debug_dma_unmap() checks the flag and if
 this flag is still set, prints warning message that includes call trace that
 leads up to the unmap. This interface can be called from dma_mapping_error()
diff --git a/kernel/dma/debug.c b/kernel/dma/debug.c
index e43c6de2bce4e..da6734e3a4ce9 100644
--- a/kernel/dma/debug.c
+++ b/kernel/dma/debug.c
@@ -39,6 +39,7 @@ enum {
 	dma_debug_sg,
 	dma_debug_coherent,
 	dma_debug_resource,
+	dma_debug_phy,
 };
 
 enum map_err_types {
@@ -141,6 +142,7 @@ static const char *type2name[] = {
 	[dma_debug_sg]		= "scatter-gather",
 	[dma_debug_coherent]	= "coherent",
 	[dma_debug_resource]	= "resource",
+	[dma_debug_phy]		= "phy",
 };
 
 static const char *dir2name[] = {
@@ -1201,9 +1203,8 @@ void debug_dma_map_single(struct device *dev, const void *addr,
 }
 EXPORT_SYMBOL(debug_dma_map_single);
 
-void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
-			size_t size, int direction, dma_addr_t dma_addr,
-			unsigned long attrs)
+void debug_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
+		int direction, dma_addr_t dma_addr, unsigned long attrs)
 {
 	struct dma_debug_entry *entry;
 
@@ -1218,19 +1219,24 @@ void debug_dma_map_page(struct device *dev, struct page *page, size_t offset,
 		return;
 
 	entry->dev = dev;
-	entry->type = dma_debug_single;
-	entry->paddr = page_to_phys(page) + offset;
+	entry->type = dma_debug_phy;
+	entry->paddr = phys;
 	entry->dev_addr = dma_addr;
 	entry->size = size;
 	entry->direction = direction;
 	entry->map_err_type = MAP_ERR_NOT_CHECKED;
 
-	check_for_stack(dev, page, offset);
+	if (!(attrs & DMA_ATTR_MMIO)) {
+		struct page *page = phys_to_page(phys);
+		size_t offset = offset_in_page(phys);
 
-	if (!PageHighMem(page)) {
-		void *addr = page_address(page) + offset;
+		check_for_stack(dev, page, offset);
 
-		check_for_illegal_area(dev, addr, size);
+		if (!PageHighMem(page)) {
+			void *addr = page_address(page) + offset;
+
+			check_for_illegal_area(dev, addr, size);
+		}
 	}
 
 	add_dma_entry(entry, attrs);
@@ -1274,11 +1280,11 @@ void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr)
 }
 EXPORT_SYMBOL(debug_dma_mapping_error);
 
-void debug_dma_unmap_page(struct device *dev, dma_addr_t dma_addr,
+void debug_dma_unmap_phys(struct device *dev, dma_addr_t dma_addr,
 			  size_t size, int direction)
 {
 	struct dma_debug_entry ref = {
-		.type = dma_debug_single,
+		.type = dma_debug_phy,
 		.dev = dev,
 		.dev_addr = dma_addr,
 		.size = size,
diff --git a/kernel/dma/debug.h b/kernel/dma/debug.h
index f525197d3cae6..76adb42bffd5f 100644
--- a/kernel/dma/debug.h
+++ b/kernel/dma/debug.h
@@ -9,12 +9,11 @@
 #define _KERNEL_DMA_DEBUG_H
 
 #ifdef CONFIG_DMA_API_DEBUG
-extern void debug_dma_map_page(struct device *dev, struct page *page,
-			       size_t offset, size_t size,
-			       int direction, dma_addr_t dma_addr,
+extern void debug_dma_map_phys(struct device *dev, phys_addr_t phys,
+			       size_t size, int direction, dma_addr_t dma_addr,
 			       unsigned long attrs);
 
-extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
+extern void debug_dma_unmap_phys(struct device *dev, dma_addr_t addr,
 				 size_t size, int direction);
 
 extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
@@ -55,14 +54,13 @@ extern void debug_dma_sync_sg_for_device(struct device *dev,
 					 struct scatterlist *sg,
 					 int nelems, int direction);
 #else /* CONFIG_DMA_API_DEBUG */
-static inline void debug_dma_map_page(struct device *dev, struct page *page,
-				      size_t offset, size_t size,
-				      int direction, dma_addr_t dma_addr,
-				      unsigned long attrs)
+static inline void debug_dma_map_phys(struct device *dev, phys_addr_t phys,
+				      size_t size, int direction,
+				      dma_addr_t dma_addr, unsigned long attrs)
 {
 }
 
-static inline void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
+static inline void debug_dma_unmap_phys(struct device *dev, dma_addr_t addr,
 					size_t size, int direction)
 {
 }
diff --git a/kernel/dma/mapping.c b/kernel/dma/mapping.c
index 107e4a4d251df..4c1dfbabb8ae5 100644
--- a/kernel/dma/mapping.c
+++ b/kernel/dma/mapping.c
@@ -157,6 +157,7 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		unsigned long attrs)
 {
 	const struct dma_map_ops *ops = get_dma_ops(dev);
+	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t addr;
 
 	BUG_ON(!valid_dma_direction(dir));
@@ -165,16 +166,15 @@ dma_addr_t dma_map_page_attrs(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	if (dma_map_direct(dev, ops) ||
-	    arch_dma_map_page_direct(dev, page_to_phys(page) + offset + size))
+	    arch_dma_map_page_direct(dev, phys + size))
 		addr = dma_direct_map_page(dev, page, offset, size, dir, attrs);
 	else if (use_dma_iommu(dev))
 		addr = iommu_dma_map_page(dev, page, offset, size, dir, attrs);
 	else
 		addr = ops->map_page(dev, page, offset, size, dir, attrs);
 	kmsan_handle_dma(page, offset, size, dir);
-	trace_dma_map_page(dev, page_to_phys(page) + offset, addr, size, dir,
-			   attrs);
-	debug_dma_map_page(dev, page, offset, size, dir, addr, attrs);
+	trace_dma_map_page(dev, phys, addr, size, dir, attrs);
+	debug_dma_map_phys(dev, phys, size, dir, addr, attrs);
 
 	return addr;
 }
@@ -194,7 +194,7 @@ void dma_unmap_page_attrs(struct device *dev, dma_addr_t addr, size_t size,
 	else
 		ops->unmap_page(dev, addr, size, dir, attrs);
 	trace_dma_unmap_page(dev, addr, size, dir, attrs);
-	debug_dma_unmap_page(dev, addr, size, dir);
+	debug_dma_unmap_phys(dev, addr, size, dir);
 }
 EXPORT_SYMBOL(dma_unmap_page_attrs);
 
@@ -712,7 +712,8 @@ struct page *dma_alloc_pages(struct device *dev, size_t size,
 	if (page) {
 		trace_dma_alloc_pages(dev, page_to_virt(page), *dma_handle,
 				      size, dir, gfp, 0);
-		debug_dma_map_page(dev, page, 0, size, dir, *dma_handle, 0);
+		debug_dma_map_phys(dev, page_to_phys(page), size, dir,
+				   *dma_handle, 0);
 	} else {
 		trace_dma_alloc_pages(dev, NULL, 0, size, dir, gfp, 0);
 	}
@@ -738,7 +739,7 @@ void dma_free_pages(struct device *dev, size_t size, struct page *page,
 		dma_addr_t dma_handle, enum dma_data_direction dir)
 {
 	trace_dma_free_pages(dev, page_to_virt(page), dma_handle, size, dir, 0);
-	debug_dma_unmap_page(dev, dma_handle, size, dir);
+	debug_dma_unmap_phys(dev, dma_handle, size, dir);
 	__dma_free_pages(dev, size, page, dma_handle, dir);
 }
 EXPORT_SYMBOL_GPL(dma_free_pages);
-- 
2.50.1