From: Leon Romanovsky
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, Abdiel Janulgue, Alexander Potapenko, Alex Gaynor, Andrew Morton, Christoph Hellwig, Danilo Krummrich, David Hildenbrand, iommu@lists.linux.dev, Jason Wang, Jens Axboe, Joerg Roedel, Jonathan Corbet, Juergen Gross, kasan-dev@googlegroups.com, Keith Busch, linux-block@vger.kernel.org, linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org, linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org, linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan, Masami Hiramatsu, Michael Ellerman, "Michael S. Tsirkin", Miguel Ojeda, Robin Murphy, rust-for-linux@vger.kernel.org, Sagi Grimberg, Stefano Stabellini, Steven Rostedt, virtualization@lists.linux.dev, Will Deacon, xen-devel@lists.xenproject.org
Subject: [PATCH v6 06/16] iommu/dma: implement DMA_ATTR_MMIO for iommu_dma_(un)map_phys()
Date: Tue, 9 Sep 2025 16:27:34 +0300

From: Leon Romanovsky

Make iommu_dma_map_phys() and iommu_dma_unmap_phys() respect
DMA_ATTR_MMIO.

DMA_ATTR_MMIO makes the functions behave the same as
iommu_dma_(un)map_resource():
 - No swiotlb is possible
 - No cache flushing is done (ATTR_MMIO should not be cached memory)
 - prot for iommu_map() has IOMMU_MMIO not IOMMU_CACHE

This is preparation for replacing iommu_dma_map_resource() callers with
iommu_dma_map_phys(DMA_ATTR_MMIO) and removing
iommu_dma_(un)map_resource().
Reviewed-by: Jason Gunthorpe
Signed-off-by: Leon Romanovsky
---
 drivers/iommu/dma-iommu.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index aea119f32f965..6804aaf034a16 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1211,16 +1211,19 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 	 */
 	if (dev_use_swiotlb(dev, size, dir) &&
 	    iova_unaligned(iovad, phys, size)) {
+		if (attrs & DMA_ATTR_MMIO)
+			return DMA_MAPPING_ERROR;
+
 		phys = iommu_dma_map_swiotlb(dev, phys, size, dir, attrs);
 		if (phys == (phys_addr_t)DMA_MAPPING_ERROR)
 			return DMA_MAPPING_ERROR;
 	}
 
-	if (!coherent && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
+	if (!coherent && !(attrs & (DMA_ATTR_SKIP_CPU_SYNC | DMA_ATTR_MMIO)))
 		arch_sync_dma_for_device(phys, size, dir);
 
 	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR)
+	if (iova == DMA_MAPPING_ERROR && !(attrs & DMA_ATTR_MMIO))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 	return iova;
 }
@@ -1228,10 +1231,14 @@ dma_addr_t iommu_dma_map_phys(struct device *dev, phys_addr_t phys, size_t size,
 void iommu_dma_unmap_phys(struct device *dev, dma_addr_t dma_handle,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	phys_addr_t phys;
 
-	phys = iommu_iova_to_phys(domain, dma_handle);
+	if (attrs & DMA_ATTR_MMIO) {
+		__iommu_dma_unmap(dev, dma_handle, size);
+		return;
+	}
+
+	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
 	if (WARN_ON(!phys))
 		return;
 
-- 
2.51.0