From: Pavel Begunkov
To: linux-block@vger.kernel.org, io-uring@vger.kernel.org
Cc: Vishal Verma, tushar.gohad@intel.com, Keith Busch, Jens Axboe,
    Christoph Hellwig, Sagi Grimberg, Alexander Viro, Christian Brauner,
    Andrew Morton, Sumit Semwal, Christian König, Pavel Begunkov,
    linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
    linux-fsdevel@vger.kernel.org, linux-media@vger.kernel.org,
    dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [RFC v2 04/11] block: introduce dma token backed bio type
Date: Sun, 23 Nov 2025 22:51:24 +0000
Message-ID: <12530de6d1907afb44be3e76e7668b935f1fd441.1763725387.git.asml.silence@gmail.com>

Premapped buffers don't require a generic bio_vec, since they have already
been dma mapped. Repurpose the bi_io_vec space for the dma token, as the two
are mutually exclusive, and provide the setup needed to support dma tokens.

In order to use this, a driver must implement the dma_map blk-mq op, in
which case it must be aware that any given bio may be carrying a dma_token
instead of a bio_vec.
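For illustration only, not part of the patch: a minimal sketch of how a
driver implementing the dma_map blk-mq op might branch on the new flag.
drv_map_data(), drv_map_token() and drv_map_bvecs() are hypothetical
stand-ins for a driver's real mapping routines; only bio_flagged(),
BIO_DMA_TOKEN and bio->dma_token come from this series.

/*
 * Illustrative sketch only: a dma_map-aware driver must expect either
 * representation on any given bio.
 */
static blk_status_t drv_map_data(struct request *rq, struct bio *bio)
{
	if (bio_flagged(bio, BIO_DMA_TOKEN))
		/* premapped: bi_io_vec is repurposed, use the dma token */
		return drv_map_token(rq, bio->dma_token);

	/* regular path: walk bi_io_vec with the usual bvec iterators */
	return drv_map_bvecs(rq, bio);
}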
Suggested-by: Keith Busch
Signed-off-by: Pavel Begunkov
---
 block/bio.c               | 21 +++++++++++++++++++++
 block/blk-merge.c         | 23 +++++++++++++++++++++++
 block/blk.h               |  3 ++-
 block/fops.c              |  2 ++
 include/linux/bio.h       | 19 ++++++++++++++++---
 include/linux/blk_types.h |  8 +++++++-
 6 files changed, 71 insertions(+), 5 deletions(-)

diff --git a/block/bio.c b/block/bio.c
index 7b13bdf72de0..8793f1ee559d 100644
--- a/block/bio.c
+++ b/block/bio.c
@@ -843,6 +843,11 @@ static int __bio_clone(struct bio *bio, struct bio *bio_src, gfp_t gfp)
 		bio_clone_blkg_association(bio, bio_src);
 	}
 
+	if (bio_flagged(bio_src, BIO_DMA_TOKEN)) {
+		bio->dma_token = bio_src->dma_token;
+		bio_set_flag(bio, BIO_DMA_TOKEN);
+	}
+
 	if (bio_crypt_clone(bio, bio_src, gfp) < 0)
 		return -ENOMEM;
 	if (bio_integrity(bio_src) &&
@@ -1167,6 +1172,18 @@ void bio_iov_bvec_set(struct bio *bio, const struct iov_iter *iter)
 	bio_set_flag(bio, BIO_CLONED);
 }
 
+void bio_iov_dma_token_set(struct bio *bio, struct iov_iter *iter)
+{
+	WARN_ON_ONCE(bio->bi_max_vecs);
+
+	bio->dma_token = iter->dma_token;
+	bio->bi_vcnt = 0;
+	bio->bi_iter.bi_bvec_done = iter->iov_offset;
+	bio->bi_iter.bi_size = iov_iter_count(iter);
+	bio->bi_opf |= REQ_NOMERGE;
+	bio_set_flag(bio, BIO_DMA_TOKEN);
+}
+
 static unsigned int get_contig_folio_len(unsigned int *num_pages,
 					 struct page **pages, unsigned int i,
 					 struct folio *folio, size_t left,
@@ -1349,6 +1366,10 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter,
 		bio_iov_bvec_set(bio, iter);
 		iov_iter_advance(iter, bio->bi_iter.bi_size);
 		return 0;
+	} else if (iov_iter_is_dma_token(iter)) {
+		bio_iov_dma_token_set(bio, iter);
+		iov_iter_advance(iter, bio->bi_iter.bi_size);
+		return 0;
 	}
 
 	if (iov_iter_extract_will_pin(iter))
diff --git a/block/blk-merge.c b/block/blk-merge.c
index d3115d7469df..c02a5f9c99e6 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -328,6 +328,29 @@ int bio_split_io_at(struct bio *bio, const struct queue_limits *lim,
 	unsigned nsegs = 0, bytes = 0, gaps = 0;
 	struct bvec_iter iter;
 
+	if (bio_flagged(bio, BIO_DMA_TOKEN)) {
+		int offset = offset_in_page(bio->bi_iter.bi_bvec_done);
+
+		nsegs = ALIGN(bio->bi_iter.bi_size + offset, PAGE_SIZE);
+		nsegs >>= PAGE_SHIFT;
+
+		if (offset & lim->dma_alignment || bytes & len_align_mask)
+			return -EINVAL;
+
+		if (bio->bi_iter.bi_size > max_bytes) {
+			bytes = max_bytes;
+			nsegs = (bytes + offset) >> PAGE_SHIFT;
+			goto split;
+		} else if (nsegs > lim->max_segments) {
+			nsegs = lim->max_segments;
+			bytes = PAGE_SIZE * nsegs - offset;
+			goto split;
+		}
+
+		*segs = nsegs;
+		return 0;
+	}
+
 	bio_for_each_bvec(bv, bio, iter) {
 		if (bv.bv_offset & lim->dma_alignment ||
 		    bv.bv_len & len_align_mask)
diff --git a/block/blk.h b/block/blk.h
index e4c433f62dfc..2c72f2630faf 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -398,7 +398,8 @@ static inline struct bio *__bio_split_to_limits(struct bio *bio,
 	switch (bio_op(bio)) {
 	case REQ_OP_READ:
 	case REQ_OP_WRITE:
-		if (bio_may_need_split(bio, lim))
+		if (bio_may_need_split(bio, lim) ||
+		    bio_flagged(bio, BIO_DMA_TOKEN))
 			return bio_split_rw(bio, lim, nr_segs);
 		*nr_segs = 1;
 		return bio;
diff --git a/block/fops.c b/block/fops.c
index 5e3db9fead77..41f8795874a9 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -354,6 +354,8 @@ static ssize_t __blkdev_direct_IO_async(struct kiocb *iocb,
 		 * bio_iov_iter_get_pages() and set the bvec directly.
 		 */
 		bio_iov_bvec_set(bio, iter);
+	} else if (iov_iter_is_dma_token(iter)) {
+		bio_iov_dma_token_set(bio, iter);
 	} else {
 		ret = blkdev_iov_iter_get_pages(bio, iter, bdev);
 		if (unlikely(ret))
diff --git a/include/linux/bio.h b/include/linux/bio.h
index c75a9b3672aa..f83342640e71 100644
--- a/include/linux/bio.h
+++ b/include/linux/bio.h
@@ -108,16 +108,26 @@ static inline bool bio_next_segment(const struct bio *bio,
 #define bio_for_each_segment_all(bvl, bio, iter) \
 	for (bvl = bvec_init_iter_all(&iter); bio_next_segment((bio), &iter); )
 
+static inline void bio_advance_iter_dma_token(struct bvec_iter *iter,
+					      unsigned int bytes)
+{
+	iter->bi_bvec_done += bytes;
+	iter->bi_size -= bytes;
+}
+
 static inline void bio_advance_iter(const struct bio *bio,
 				    struct bvec_iter *iter, unsigned int bytes)
 {
 	iter->bi_sector += bytes >> 9;
 
-	if (bio_no_advance_iter(bio))
+	if (bio_no_advance_iter(bio)) {
 		iter->bi_size -= bytes;
-	else
+	} else if (bio_flagged(bio, BIO_DMA_TOKEN)) {
+		bio_advance_iter_dma_token(iter, bytes);
+	} else {
 		bvec_iter_advance(bio->bi_io_vec, iter, bytes);
 		/* TODO: It is reasonable to complete bio with error here. */
+	}
 }
 
 /* @bytes should be less or equal to bvec[i->bi_idx].bv_len */
@@ -129,6 +139,8 @@ static inline void bio_advance_iter_single(const struct bio *bio,
 
 	if (bio_no_advance_iter(bio))
 		iter->bi_size -= bytes;
+	else if (bio_flagged(bio, BIO_DMA_TOKEN))
+		bio_advance_iter_dma_token(iter, bytes);
 	else
 		bvec_iter_advance_single(bio->bi_io_vec, iter, bytes);
 }
@@ -398,7 +410,7 @@ static inline void bio_wouldblock_error(struct bio *bio)
  */
 static inline int bio_iov_vecs_to_alloc(struct iov_iter *iter, int max_segs)
 {
-	if (iov_iter_is_bvec(iter))
+	if (iov_iter_is_bvec(iter) || iov_iter_is_dma_token(iter))
 		return 0;
 	return iov_iter_npages(iter, max_segs);
 }
@@ -452,6 +464,7 @@ int bio_iov_iter_get_pages(struct bio *bio, struct iov_iter *iter,
 		unsigned len_align_mask);
 
 void bio_iov_bvec_set(struct bio *bio, const struct iov_iter *iter);
+void bio_iov_dma_token_set(struct bio *bio, struct iov_iter *iter);
 void __bio_release_pages(struct bio *bio, bool mark_dirty);
 extern void bio_set_pages_dirty(struct bio *bio);
 extern void bio_check_pages_dirty(struct bio *bio);
diff --git a/include/linux/blk_types.h b/include/linux/blk_types.h
index cbbcb9051ec3..3bc7f89d4e66 100644
--- a/include/linux/blk_types.h
+++ b/include/linux/blk_types.h
@@ -275,7 +275,12 @@ struct bio {
 
 	atomic_t		__bi_cnt;	/* pin count */
 
-	struct bio_vec		*bi_io_vec;	/* the actual vec list */
+	union {
+		struct bio_vec	*bi_io_vec;	/* the actual vec list */
+		/* Driver specific dma map, present only with BIO_DMA_TOKEN */
+		struct dma_token *dma_token;
+	};
+
 
 	struct bio_set		*bi_pool;
 };
@@ -315,6 +320,7 @@ enum {
 	BIO_REMAPPED,
 	BIO_ZONE_WRITE_PLUGGING, /* bio handled through zone write plugging */
 	BIO_EMULATES_ZONE_APPEND, /* bio emulates a zone append operation */
+	BIO_DMA_TOKEN,		/* Using premapped dma buffers */
 	BIO_FLAG_LAST
 };
 
-- 
2.52.0