From: Leon Romanovsky
To: Jens Axboe, Keith Busch, Christoph Hellwig, Sagi Grimberg
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: [PATCH 1/3] blk-mq-dma: migrate to dma_map_phys instead of map_page
Date: Fri, 17 Oct 2025 08:31:58 +0300
Message-ID: <20251017-block-with-mmio-v1-1-3f486904db5e@nvidia.com>
In-Reply-To: <20251017-block-with-mmio-v1-0-3f486904db5e@nvidia.com>
References: <20251017-block-with-mmio-v1-0-3f486904db5e@nvidia.com>

After the introduction of dma_map_phys(), there is no need to convert
a physical address to a struct page just to map it. Use dma_map_phys()
directly.
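To illustrate the conversion (a sketch only, not part of the diff below;
dev, paddr, len and dir are placeholder names):

	/*
	 * For ordinary memory the two calls produce the same mapping.
	 * dma_map_page() makes the caller split the physical address into
	 * a struct page plus an in-page offset; dma_map_phys() takes the
	 * physical address as-is, with an extra DMA attrs argument (0 here).
	 */
	dma_addr_t addr_old = dma_map_page(dev, phys_to_page(paddr),
					   offset_in_page(paddr), len, dir);
	dma_addr_t addr_new = dma_map_phys(dev, paddr, len, dir, 0);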
Reviewed-by: Keith Busch
Signed-off-by: Leon Romanovsky
---
 block/blk-mq-dma.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index 449950029872..4ba7b0323da4 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -93,8 +93,8 @@ static bool blk_dma_map_bus(struct blk_dma_iter *iter, struct phys_vec *vec)
 static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
 		struct blk_dma_iter *iter, struct phys_vec *vec)
 {
-	iter->addr = dma_map_page(dma_dev, phys_to_page(vec->paddr),
-			offset_in_page(vec->paddr), vec->len, rq_dma_dir(req));
+	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
+			rq_dma_dir(req), 0);
 	if (dma_mapping_error(dma_dev, iter->addr)) {
 		iter->status = BLK_STS_RESOURCE;
 		return false;
-- 
2.51.0

From: Leon Romanovsky
To: Jens Axboe, Keith Busch, Christoph Hellwig, Sagi Grimberg
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: [PATCH 2/3] nvme-pci: unmap MMIO pages with appropriate interface
Date: Fri, 17 Oct 2025 08:31:59 +0300
Message-ID: <20251017-block-with-mmio-v1-2-3f486904db5e@nvidia.com>
In-Reply-To: <20251017-block-with-mmio-v1-0-3f486904db5e@nvidia.com>
References: <20251017-block-with-mmio-v1-0-3f486904db5e@nvidia.com>
The block layer maps MMIO memory through the dma_map_phys() interface
with the help of the DMA_ATTR_MMIO attribute. Such memory must be
unmapped with the matching unmap interface, which is not possible until
the block layer carries the new REQ_MMIO flag (added later in this
series).

Reviewed-by: Keith Busch
Signed-off-by: Leon Romanovsky
---
 drivers/nvme/host/pci.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index c916176bd9f0..2e9fb3c7bc09 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -689,11 +689,15 @@ static void nvme_free_prps(struct request *req)
 {
 	struct nvme_iod *iod = blk_mq_rq_to_pdu(req);
 	struct nvme_queue *nvmeq = req->mq_hctx->driver_data;
+	unsigned int attrs = 0;
 	unsigned int i;
 
+	if (req->cmd_flags & REQ_MMIO)
+		attrs |= DMA_ATTR_MMIO;
+
 	for (i = 0; i < iod->nr_dma_vecs; i++)
-		dma_unmap_page(nvmeq->dev->dev, iod->dma_vecs[i].addr,
-				iod->dma_vecs[i].len, rq_dma_dir(req));
+		dma_unmap_phys(nvmeq->dev->dev, iod->dma_vecs[i].addr,
+				iod->dma_vecs[i].len, rq_dma_dir(req), attrs);
 	mempool_free(iod->dma_vecs, nvmeq->dev->dmavec_mempool);
 }
 
@@ -704,16 +708,20 @@ static void nvme_free_sgls(struct request *req, struct nvme_sgl_desc *sge,
 	enum dma_data_direction dir = rq_dma_dir(req);
 	unsigned int len = le32_to_cpu(sge->length);
 	struct device *dma_dev = nvmeq->dev->dev;
+	unsigned int attrs = 0;
 	unsigned int i;
 
+	if (req->cmd_flags & REQ_MMIO)
+		attrs |= DMA_ATTR_MMIO;
+
 	if (sge->type == (NVME_SGL_FMT_DATA_DESC << 4)) {
-		dma_unmap_page(dma_dev, le64_to_cpu(sge->addr), len, dir);
+		dma_unmap_phys(dma_dev, le64_to_cpu(sge->addr), len, dir, attrs);
 		return;
 	}
 
 	for (i = 0; i < len / sizeof(*sg_list); i++)
-		dma_unmap_page(dma_dev, le64_to_cpu(sg_list[i].addr),
-				le32_to_cpu(sg_list[i].length), dir);
+		dma_unmap_phys(dma_dev, le64_to_cpu(sg_list[i].addr),
+				le32_to_cpu(sg_list[i].length), dir, attrs);
 }
 
 static void nvme_unmap_metadata(struct request *req)
-- 
2.51.0
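For reference, the map/unmap pairing the patch above establishes can be
sketched as follows (a hypothetical helper; example_unmap_vec and its
parameters are illustrative names, not part of the series):

	static void example_unmap_vec(struct device *dev, struct request *req,
				      dma_addr_t addr, unsigned int len)
	{
		unsigned int attrs = 0;

		/*
		 * REQ_MMIO marks requests whose payload was mapped with
		 * DMA_ATTR_MMIO; passing the same attribute at unmap time
		 * avoids CPU cache maintenance that is invalid for MMIO.
		 */
		if (req->cmd_flags & REQ_MMIO)
			attrs |= DMA_ATTR_MMIO;

		dma_unmap_phys(dev, addr, len, rq_dma_dir(req), attrs);
	}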
From: Leon Romanovsky
To: Jens Axboe, Keith Busch, Christoph Hellwig, Sagi Grimberg
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org
Subject: [PATCH 3/3] block-dma: properly take MMIO path
Date: Fri, 17 Oct 2025 08:32:00 +0300
Message-ID: <20251017-block-with-mmio-v1-3-3f486904db5e@nvidia.com>
In-Reply-To: <20251017-block-with-mmio-v1-0-3f486904db5e@nvidia.com>
References: <20251017-block-with-mmio-v1-0-3f486904db5e@nvidia.com>

Make sure the CPU is not synced and the IOMMU is configured to take the
MMIO path by passing the newly introduced DMA_ATTR_MMIO attribute.

Signed-off-by: Leon Romanovsky
---
 block/blk-mq-dma.c            | 10 ++++++++--
 include/linux/bio-integrity.h |  1 +
 include/linux/blk-integrity.h |  3 ++-
 include/linux/blk-mq-dma.h    | 14 +++++++++++---
 include/linux/blk_types.h     |  2 ++
 5 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/block/blk-mq-dma.c b/block/blk-mq-dma.c
index 4ba7b0323da4..e1f460da95d7 100644
--- a/block/blk-mq-dma.c
+++ b/block/blk-mq-dma.c
@@ -94,7 +94,7 @@ static bool blk_dma_map_direct(struct request *req, struct device *dma_dev,
 		struct blk_dma_iter *iter, struct phys_vec *vec)
 {
 	iter->addr = dma_map_phys(dma_dev, vec->paddr, vec->len,
-			rq_dma_dir(req), 0);
+			rq_dma_dir(req), iter->iter.attrs);
 	if (dma_mapping_error(dma_dev, iter->addr)) {
 		iter->status = BLK_STS_RESOURCE;
 		return false;
@@ -116,7 +116,7 @@ static bool blk_rq_dma_map_iova(struct request *req, struct device *dma_dev,
 
 	do {
 		error = dma_iova_link(dma_dev, state, vec->paddr, mapped,
-				vec->len, dir, 0);
+				vec->len, dir, iter->iter.attrs);
 		if (error)
 			break;
 		mapped += vec->len;
@@ -184,6 +184,12 @@ static bool blk_dma_map_iter_start(struct request *req, struct device *dma_dev,
 		 * P2P transfers through the host bridge are treated the
 		 * same as non-P2P transfers below and during unmap.
 		 */
+		if (iter->iter.is_integrity)
+			bio_integrity(req->bio)->bip_flags |= BIP_MMIO;
+		else
+			req->cmd_flags |= REQ_MMIO;
+		iter->iter.attrs |= DMA_ATTR_MMIO;
+		fallthrough;
 	case PCI_P2PDMA_MAP_NONE:
 		break;
 	default:
diff --git a/include/linux/bio-integrity.h b/include/linux/bio-integrity.h
index 851254f36eb3..b77b2cfb7b0f 100644
--- a/include/linux/bio-integrity.h
+++ b/include/linux/bio-integrity.h
@@ -14,6 +14,7 @@ enum bip_flags {
 	BIP_CHECK_REFTAG	= 1 << 6, /* reftag check */
 	BIP_CHECK_APPTAG	= 1 << 7, /* apptag check */
 	BIP_P2P_DMA		= 1 << 8, /* using P2P address */
+	BIP_MMIO		= 1 << 9, /* contains MMIO memory */
 };
 
 struct bio_integrity_payload {
diff --git a/include/linux/blk-integrity.h b/include/linux/blk-integrity.h
index b659373788f6..34648d6c14d7 100644
--- a/include/linux/blk-integrity.h
+++ b/include/linux/blk-integrity.h
@@ -33,7 +33,8 @@ static inline bool blk_rq_integrity_dma_unmap(struct request *req,
 		size_t mapped_len)
 {
 	return blk_dma_unmap(req, dma_dev, state, mapped_len,
-			bio_integrity(req->bio)->bip_flags & BIP_P2P_DMA);
+			bio_integrity(req->bio)->bip_flags & BIP_P2P_DMA,
+			bio_integrity(req->bio)->bip_flags & BIP_MMIO);
 }
 
 int blk_rq_count_integrity_sg(struct request_queue *, struct bio *);
diff --git a/include/linux/blk-mq-dma.h b/include/linux/blk-mq-dma.h
index 51829958d872..916ca1deaf2c 100644
--- a/include/linux/blk-mq-dma.h
+++ b/include/linux/blk-mq-dma.h
@@ -10,6 +10,7 @@ struct blk_map_iter {
 	struct bio *bio;
 	struct bio_vec *bvecs;
 	bool is_integrity;
+	unsigned int attrs;
 };
 
 struct blk_dma_iter {
@@ -49,19 +50,25 @@ static inline bool blk_rq_dma_map_coalesce(struct dma_iova_state *state)
  * @state: DMA IOVA state
  * @mapped_len: number of bytes to unmap
  * @is_p2p: true if mapped with PCI_P2PDMA_MAP_BUS_ADDR
+ * @is_mmio: true if mapped with PCI_P2PDMA_MAP_THRU_HOST_BRIDGE
  *
  * Returns %false if the callers need to manually unmap every DMA segment
  * mapped using @iter or %true if no work is left to be done.
  */
 static inline bool blk_dma_unmap(struct request *req, struct device *dma_dev,
-		struct dma_iova_state *state, size_t mapped_len, bool is_p2p)
+		struct dma_iova_state *state, size_t mapped_len, bool is_p2p,
+		bool is_mmio)
 {
 	if (is_p2p)
 		return true;
 
 	if (dma_use_iova(state)) {
+		unsigned int attrs = 0;
+
+		if (is_mmio)
+			attrs = DMA_ATTR_MMIO;
 		dma_iova_destroy(dma_dev, state, mapped_len, rq_dma_dir(req),
-				0);
+				attrs);
 		return true;
 	}
 
@@ -72,7 +79,8 @@ static inline bool blk_rq_dma_unmap(struct request *req, struct device *dma_dev,
 		struct dma_iova_state *state, size_t mapped_len)
 {
 	return blk_dma_unmap(req, dma_dev, state, mapped_len,
-			req->cmd_flags & REQ_P2PDMA);
+			req->cmd_flags & REQ_P2PDMA,
+			req->cmd_flags & REQ_MMIO);
 }
 
 #endif /* BLK_MQ_DMA_H */
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index 8e8d1cc8b06c..9affa3b2d047 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -382,6 +382,7 @@ enum req_flag_bits {
 	__REQ_FS_PRIVATE,	/* for file system (submitter) use */
 	__REQ_ATOMIC,		/* for atomic write operations */
 	__REQ_P2PDMA,		/* contains P2P DMA pages */
+	__REQ_MMIO,		/* contains MMIO memory */
 	/*
 	 * Command specific flags, keep last:
 	 */
@@ -415,6 +416,7 @@ enum req_flag_bits {
 #define REQ_FS_PRIVATE	(__force blk_opf_t)(1ULL << __REQ_FS_PRIVATE)
 #define REQ_ATOMIC	(__force blk_opf_t)(1ULL << __REQ_ATOMIC)
 #define REQ_P2PDMA	(__force blk_opf_t)(1ULL << __REQ_P2PDMA)
+#define REQ_MMIO	(__force blk_opf_t)(1ULL << __REQ_MMIO)
 
 #define REQ_NOUNMAP	(__force blk_opf_t)(1ULL << __REQ_NOUNMAP)
 
-- 
2.51.0
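As a closing usage sketch (a hypothetical completion-path caller;
example_complete_rq is an illustrative name, and state/mapped_len are
assumed to be tracked by the driver as in the existing blk-mq-dma flow):

	static void example_complete_rq(struct request *req,
					struct device *dma_dev,
					struct dma_iova_state *state,
					size_t mapped_len)
	{
		/*
		 * blk_rq_dma_unmap() now derives both the P2P and the MMIO
		 * properties from req->cmd_flags (REQ_P2PDMA / REQ_MMIO),
		 * so the caller no longer needs to remember how the request
		 * was mapped.
		 */
		if (!blk_rq_dma_unmap(req, dma_dev, state, mapped_len)) {
			/*
			 * %false: each mapped segment must still be
			 * released individually, e.g. with dma_unmap_phys().
			 */
		}
	}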