From: Leon Romanovsky
To: Marek Szyprowski
Cc: Leon Romanovsky, Jason Gunthorpe, Abdiel Janulgue, Alexander Potapenko,
	Alex Gaynor, Andrew Morton, Christoph Hellwig, Danilo Krummrich,
	iommu@lists.linux.dev, Jason Wang, Jens Axboe, Joerg Roedel,
	Jonathan Corbet, Juergen Gross, kasan-dev@googlegroups.com,
	Keith Busch, linux-block@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-nvme@lists.infradead.org, linuxppc-dev@lists.ozlabs.org,
	linux-trace-kernel@vger.kernel.org, Madhavan Srinivasan,
	Masami Hiramatsu, Michael Ellerman, "Michael S. Tsirkin",
	Miguel Ojeda, Robin Murphy, rust-for-linux@vger.kernel.org,
	Sagi Grimberg, Stefano Stabellini, Steven Rostedt,
	virtualization@lists.linux.dev, Will Deacon,
	xen-devel@lists.xenproject.org
Subject: [PATCH v1 15/16] block-dma: properly take MMIO path
Date: Mon, 4 Aug 2025 15:42:49 +0300

From: Leon Romanovsky

Make sure that the CPU is not synced and the IOMMU is configured to take
the MMIO path by providing the newly introduced DMA_ATTR_MMIO attribute.

Signed-off-by: Leon Romanovsky
---
 block/blk-mq-dma.c         | 13 +++++++++++--
 include/linux/blk-mq-dma.h |  6 +++++-
 include/linux/blk_types.h  |  2 ++
 3 files changed, 18 insertions(+), 3 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index 37e2142be4f7d..d415088ed9fd2 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -87,8 +87,13 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
 static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
 		struct blk_dma_iter *iter, struct phys_vec *vec)
 {
+	unsigned int attrs = 0;
+
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
+
 	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
-			rq_dma_dir(req), 0);
+			rq_dma_dir(req), attrs);
 	if (dma_mapping_error(dma_dev, iter->addr)) {
 		iter->status = BLK_STS_RESOURCE;
 		return false;
@@ -103,14 +108,17 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
 {
 	enum dma_data_direction dir = rq_dma_dir(req);
 	unsigned int mapped = 0;
+	unsigned int attrs = 0;
 	int error;
 
 	iter->addr = state->addr;
 	iter->len = dma_iova_size(state);
+	if (req->cmd_flags & REQ_MMIO)
+		attrs = DMA_ATTR_MMIO;
 
 	do {
 		error = dma_iova_link(dma_dev, state, vec->paddr, mapped,
-				vec->len, dir, 0);
+				vec->len, dir, attrs);
 		if (error)
 			break;
 		mapped += vec->len;
@@ -176,6 +184,7 @@ bool blk_rq_dma_map_iter_start(struct request *req, struct device *dma_dev,
 		 * same as non-P2P transfers below and during unmap.
 		 */
 		req->cmd_flags &= ~REQ_P2PDMA;
+		req->cmd_flags |= REQ_MMIO;
 		break;
 	default:
 		iter->status = BLK_STS_INVAL;
diff --git a/include/linux/blk-mq-dma.h b/include/linux/blk-mq-dma.h
index c26a01aeae006..6c55f5e585116 100644
--- a/include/linux/blk-mq-dma.h
+++ b/include/linux/blk-mq-dma.h
@@ -48,12 +48,16 @@ static inline bool blk_rq_dma_map_coalesce(struct dma_iova_state *state)
 static inline bool blk_rq_dma_unmap(struct request *req, struct device *dma_dev,
 		struct dma_iova_state *state, size_t mapped_len)
 {
+	unsigned int attrs = 0;
+
 	if (req->cmd_flags & REQ_P2PDMA)
 		return true;
 
 	if (dma_use_iova(state)) {
+		if (req->cmd_flags & REQ_MMIO)
+			attrs = DMA_ATTR_MMIO;
 		dma_iova_destroy(dma_dev, state, mapped_len, rq_dma_dir(req),
-				0);
+				attrs);
 		return true;
 	}
 
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 09b99d52fd365..283058bcb5b14 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -387,6 +387,7 @@ enum req_flag_bits {
 	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
 	__REQ_ATOMIC,		/* for atomic write operations */
 	__REQ_P2PDMA,		/* contains P2P DMA pages */
+	__REQ_MMIO,		/* contains MMIO memory */
 	/*
 	 * Command specific flags, keep last:
 	 */
@@ -420,6 +421,7 @@ enum req_flag_bits {
 #define REQ_FS_PRIVATE	(__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE)
 #define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)
 #define REQ_P2PDMA	(__force blk_opf_t)(1ULL << __REQ_P2PDMA)
+#define REQ_MMIO	(__force blk_opf_t)(1ULL << __REQ_MMIO)
 
 #define REQ_NOUNMAP	(__force blk_opf_t)(1ULL << __REQ_NOUNMAP)
 
-- 
2.50.1
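
As an aside, the attribute selection that this patch open-codes in
blk_dma_map_direct(), blk_rq_dma_map_iova() and blk_rq_dma_unmap() follows
a single pattern: a request carrying REQ_MMIO is mapped with DMA_ATTR_MMIO,
so the DMA layer skips CPU cache syncing and programs the IOMMU for the
MMIO path. Below is a minimal sketch of that pattern, assuming only the
flags introduced by this series; the helper name blk_rq_dma_attrs() is
hypothetical and not part of the patch.

	/*
	 * Hypothetical helper (illustration only, not part of this patch):
	 * translate the block-layer REQ_MMIO flag into the DMA API
	 * attribute used by the mapping and unmapping paths above.
	 */
	static inline unsigned int blk_rq_dma_attrs(struct request *req)
	{
		return (req->cmd_flags & REQ_MMIO) ? DMA_ATTR_MMIO : 0;
	}

With such a helper, the direct-mapping call site in the patch would read,
for example:

	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
			rq_dma_dir(req), blk_rq_dma_attrs(req));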