From: Pavel Begunkov <asml.silence@gmail.com>
To: linux-block@vger.kernel.org, io-uring@vger.kernel.org
Cc: Vishal Verma, tushar.gohad@intel.com, Keith Busch, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Alexander Viro, Christian Brauner,
	Andrew Morton, Sumit Semwal, Christian König, Pavel Begunkov,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-media@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [RFC v2 05/11] block: add infra to handle dmabuf tokens
Date: Sun, 23 Nov 2025 22:51:25 +0000
Message-ID: <51cddd97b31d80ec8842a88b9f3c9881419e8a7b.1763725387.git.asml.silence@gmail.com>
X-Mailer: git-send-email 2.52.0

Add blk-mq infrastructure to handle dmabuf tokens. There are two main
objects. The first is struct blk_mq_dma_token, an extension of struct
dma_token that is passed around in an iterator. The second is struct
blk_mq_dma_map, which holds the actual mapping and, unlike the token,
can be ejected (e.g. by move_notify) and recreated.

The token keeps an RCU-protected pointer to the mapping, so resolving
the token into a mapping for a request is an RCU-protected lookup that
takes a percpu reference to the mapping. If no mapping is currently
attached to the token, one has to be created by calling into the driver
(e.g. nvme) via a new callback. That requires waiting, and therefore
can't be done for nowait requests and couldn't happen deeper in the
stack, e.g. during nvme request submission.

The structure split is needed because move_notify can demand
invalidation of the dma mapping at any moment, so we need a way to
remove the mapping concurrently and wait for the in-flight requests
still using the previous mapping to complete.
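To make the flow concrete, the per-request resolution path described
above (a simplified sketch of blk_rq_assign_dma_map() from the patch
below, with error handling elided) is roughly:

	/* Fast path: RCU lookup plus a percpu ref grab, never waits. */
	map = blk_mq_get_token_map(token);
	if (!map) {
		/* The slow path may block on ->dma_map(), so it is
		 * forbidden for nowait submissions.
		 */
		if (rq->cmd_flags & REQ_NOWAIT)
			return BLK_STS_AGAIN;
		/* Recreate the mapping via the driver callback. */
		map = blk_mq_create_dma_map(token);
	}
	rq->dma_map = map;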
Signed-off-by: Pavel Begunkov <asml.silence@gmail.com>
---
 block/Makefile                   |   1 +
 block/bdev.c                     |  14 ++
 block/blk-mq-dma-token.c         | 236 +++++++++++++++++++++++++++++++
 block/blk-mq.c                   |  20 +++
 block/fops.c                     |   1 +
 include/linux/blk-mq-dma-token.h |  60 ++++++++
 include/linux/blk-mq.h           |  21 +++
 include/linux/blkdev.h           |   3 +
 8 files changed, 356 insertions(+)
 create mode 100644 block/blk-mq-dma-token.c
 create mode 100644 include/linux/blk-mq-dma-token.h

diff --git a/block/Makefile b/block/Makefile
index c65f4da93702..0190e5aa9f00 100644
--- a/block/Makefile
+++ b/block/Makefile
@@ -36,3 +36,4 @@ obj-$(CONFIG_BLK_INLINE_ENCRYPTION)	+= blk-crypto.o blk-crypto-profile.o \
 					blk-crypto-sysfs.o
 obj-$(CONFIG_BLK_INLINE_ENCRYPTION_FALLBACK)	+= blk-crypto-fallback.o
 obj-$(CONFIG_BLOCK_HOLDER_DEPRECATED)	+= holder.o
+obj-$(CONFIG_DMA_SHARED_BUFFER)	+= blk-mq-dma-token.o
diff --git a/block/bdev.c b/block/bdev.c
index 810707cca970..da89d20f33f3 100644
--- a/block/bdev.c
+++ b/block/bdev.c
@@ -28,6 +28,7 @@
 #include
 #include
 #include
+#include
 #include "../fs/internal.h"
 #include "blk.h"
 
@@ -61,6 +62,19 @@ struct block_device *file_bdev(struct file *bdev_file)
 }
 EXPORT_SYMBOL(file_bdev);
 
+struct dma_token *blkdev_dma_map(struct file *file,
+				 struct dma_token_params *params)
+{
+	struct request_queue *q = bdev_get_queue(file_bdev(file));
+
+	if (!(file->f_flags & O_DIRECT))
+		return ERR_PTR(-EINVAL);
+	if (!q->mq_ops)
+		return ERR_PTR(-EINVAL);
+
+	return blk_mq_dma_map(q, params);
+}
+
 static void bdev_write_inode(struct block_device *bdev)
 {
 	struct inode *inode = BD_INODE(bdev);
diff --git a/block/blk-mq-dma-token.c b/block/blk-mq-dma-token.c
new file mode 100644
index 000000000000..cd62c4d09422
--- /dev/null
+++ b/block/blk-mq-dma-token.c
@@ -0,0 +1,236 @@
+#include
+#include
+
+struct blk_mq_dma_fence {
+	struct dma_fence base;
+	spinlock_t lock;
+};
+
+static const char *blk_mq_fence_drv_name(struct dma_fence *fence)
+{
+	return "blk-mq";
+}
+
+const struct dma_fence_ops blk_mq_dma_fence_ops = {
+	.get_driver_name	= blk_mq_fence_drv_name,
+	.get_timeline_name	= blk_mq_fence_drv_name,
+};
+
+static void blk_mq_dma_token_free(struct blk_mq_dma_token *token)
+{
+	token->q->mq_ops->clean_dma_token(token->q, token);
+	dma_buf_put(token->dmabuf);
+	kfree(token);
+}
+
+static inline void blk_mq_dma_token_put(struct blk_mq_dma_token *token)
+{
+	if (refcount_dec_and_test(&token->refs))
+		blk_mq_dma_token_free(token);
+}
+
+static void blk_mq_dma_mapping_free(struct blk_mq_dma_map *map)
+{
+	struct blk_mq_dma_token *token = map->token;
+
+	if (map->sgt)
+		token->q->mq_ops->dma_unmap(token->q, map);
+
+	dma_fence_put(&map->fence->base);
+	percpu_ref_exit(&map->refs);
+	kfree(map);
+	blk_mq_dma_token_put(token);
+}
+
+static void blk_mq_dma_map_work_free(struct work_struct *work)
+{
+	struct blk_mq_dma_map *map = container_of(work, struct blk_mq_dma_map,
+						  free_work);
+
+	dma_fence_signal(&map->fence->base);
+	blk_mq_dma_mapping_free(map);
+}
+
+static void blk_mq_dma_map_refs_free(struct percpu_ref *ref)
+{
+	struct blk_mq_dma_map *map = container_of(ref, struct blk_mq_dma_map, refs);
+
+	INIT_WORK(&map->free_work, blk_mq_dma_map_work_free);
+	queue_work(system_wq, &map->free_work);
+}
+
+static struct blk_mq_dma_map *blk_mq_alloc_dma_mapping(struct blk_mq_dma_token *token)
+{
+	struct blk_mq_dma_fence *fence = NULL;
+	struct blk_mq_dma_map *map;
+	int ret = -ENOMEM;
+
+	map = kzalloc(sizeof(*map), GFP_KERNEL);
+	if (!map)
+		return ERR_PTR(-ENOMEM);
+
+	fence = kzalloc(sizeof(*fence), GFP_KERNEL);
+	if (!fence)
+		goto err;
+
+	ret = percpu_ref_init(&map->refs, blk_mq_dma_map_refs_free, 0,
+			      GFP_KERNEL);
+	if (ret)
+		goto err;
+
+	dma_fence_init(&fence->base, &blk_mq_dma_fence_ops, &fence->lock,
+		       token->fence_ctx, atomic_inc_return(&token->fence_seq));
+	spin_lock_init(&fence->lock);
+	map->fence = fence;
+	map->token = token;
+	refcount_inc(&token->refs);
+	return map;
+err:
+	kfree(map);
+	kfree(fence);
+	return ERR_PTR(ret);
+}
+
+static inline
+struct blk_mq_dma_map *blk_mq_get_token_map(struct blk_mq_dma_token *token)
+{
+	struct blk_mq_dma_map *map;
+
+	guard(rcu)();
+
+	map = rcu_dereference(token->map);
+	if (unlikely(!map || !percpu_ref_tryget_live_rcu(&map->refs)))
+		return NULL;
+	return map;
+}
+
+static struct blk_mq_dma_map *
+blk_mq_create_dma_map(struct blk_mq_dma_token *token)
+{
+	struct dma_buf *dmabuf = token->dmabuf;
+	struct blk_mq_dma_map *map;
+	long ret;
+
+	guard(mutex)(&token->mapping_lock);
+
+	map = blk_mq_get_token_map(token);
+	if (map)
+		return map;
+
+	map = blk_mq_alloc_dma_mapping(token);
+	if (IS_ERR(map))
+		return map;
+
+	dma_resv_lock(dmabuf->resv, NULL);
+	ret = dma_resv_wait_timeout(dmabuf->resv, DMA_RESV_USAGE_BOOKKEEP,
+				    true, MAX_SCHEDULE_TIMEOUT);
+	ret = ret ? ret : -ETIME;
+	if (ret > 0)
+		ret = token->q->mq_ops->dma_map(token->q, map);
+	dma_resv_unlock(dmabuf->resv);
+
+	if (ret)
+		return ERR_PTR(ret);
+
+	percpu_ref_get(&map->refs);
+	rcu_assign_pointer(token->map, map);
+	return map;
+}
+
+static void blk_mq_dma_map_remove(struct blk_mq_dma_token *token)
+{
+	struct dma_buf *dmabuf = token->dmabuf;
+	struct blk_mq_dma_map *map;
+	int ret;
+
+	dma_resv_assert_held(dmabuf->resv);
+
+	ret = dma_resv_reserve_fences(dmabuf->resv, 1);
+	if (WARN_ON_ONCE(ret))
+		return;
+
+	map = rcu_dereference_protected(token->map,
+					dma_resv_held(dmabuf->resv));
+	if (!map)
+		return;
+	rcu_assign_pointer(token->map, NULL);
+
+	dma_resv_add_fence(dmabuf->resv, &map->fence->base,
+			   DMA_RESV_USAGE_KERNEL);
+	percpu_ref_kill(&map->refs);
+}
+
+blk_status_t blk_rq_assign_dma_map(struct request *rq,
+				   struct blk_mq_dma_token *token)
+{
+	struct blk_mq_dma_map *map;
+
+	map = blk_mq_get_token_map(token);
+	if (map)
+		goto complete;
+
+	if (rq->cmd_flags & REQ_NOWAIT)
+		return BLK_STS_AGAIN;
+
+	map = blk_mq_create_dma_map(token);
+	if (IS_ERR(map))
+		return BLK_STS_RESOURCE;
+complete:
+	rq->dma_map = map;
+	return BLK_STS_OK;
+}
+
+void blk_mq_dma_map_move_notify(struct blk_mq_dma_token *token)
+{
+	blk_mq_dma_map_remove(token);
+}
+
+static void blk_mq_release_dma_mapping(struct dma_token *base_token)
+{
+	struct blk_mq_dma_token *token = dma_token_to_blk_mq(base_token);
+	struct dma_buf *dmabuf = token->dmabuf;
+
+	dma_resv_lock(dmabuf->resv, NULL);
+	blk_mq_dma_map_remove(token);
+	dma_resv_unlock(dmabuf->resv);
+
+	blk_mq_dma_token_put(token);
+}
+
+struct dma_token *blk_mq_dma_map(struct request_queue *q,
+				 struct dma_token_params *params)
+{
+	struct dma_buf *dmabuf = params->dmabuf;
+	struct blk_mq_dma_token *token;
+	int ret;
+
+	if (!q->mq_ops->dma_map || !q->mq_ops->dma_unmap ||
+	    !q->mq_ops->init_dma_token || !q->mq_ops->clean_dma_token)
+		return ERR_PTR(-EINVAL);
+
+	token = kzalloc(sizeof(*token), GFP_KERNEL);
+	if (!token)
+		return ERR_PTR(-ENOMEM);
+
+	get_dma_buf(dmabuf);
+	token->fence_ctx = dma_fence_context_alloc(1);
+	token->dmabuf = dmabuf;
+	token->dir = params->dir;
+	token->base.release = blk_mq_release_dma_mapping;
+	token->q = q;
+	refcount_set(&token->refs, 1);
+	mutex_init(&token->mapping_lock);
+
+	if (!blk_get_queue(q)) {
+		kfree(token);
+		return ERR_PTR(-EFAULT);
+	}
+
+	ret = token->q->mq_ops->init_dma_token(token->q, token);
+	if (ret) {
+		kfree(token);
+		blk_put_queue(q);
+		return ERR_PTR(ret);
+	}
+	return &token->base;
+}
diff --git a/block/blk-mq.c b/block/blk-mq.c
index f2650c97a75e..1ff3a7e3191b 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -29,6 +29,7 @@
 #include
 #include
 #include
+#include
 
 #include
 
@@ -439,6 +440,7 @@ static struct request *blk_mq_rq_ctx_init(struct blk_mq_alloc_data *data,
 	rq->nr_integrity_segments = 0;
 	rq->end_io = NULL;
 	rq->end_io_data = NULL;
+	rq->dma_map = NULL;
 
 	blk_crypto_rq_set_defaults(rq);
 	INIT_LIST_HEAD(&rq->queuelist);
@@ -794,6 +796,7 @@ static void __blk_mq_free_request(struct request *rq)
 	blk_pm_mark_last_busy(rq);
 	rq->mq_hctx = NULL;
 
+	blk_rq_drop_dma_map(rq);
 	if (rq->tag != BLK_MQ_NO_TAG) {
 		blk_mq_dec_active_requests(hctx);
 		blk_mq_put_tag(hctx->tags, ctx, rq->tag);
@@ -3214,6 +3217,23 @@ void blk_mq_submit_bio(struct bio *bio)
 
 	blk_mq_bio_to_request(rq, bio, nr_segs);
 
+	if (bio_flagged(bio, BIO_DMA_TOKEN)) {
+		struct blk_mq_dma_token *token;
+		blk_status_t ret;
+
+		token = dma_token_to_blk_mq(bio->dma_token);
+		ret = blk_rq_assign_dma_map(rq, token);
+		if (ret) {
+			if (ret == BLK_STS_AGAIN) {
+				bio_wouldblock_error(bio);
+			} else {
+				bio->bi_status = BLK_STS_RESOURCE;
+				bio_endio(bio);
+			}
+			goto queue_exit;
+		}
+	}
+
 	ret = blk_crypto_rq_get_keyslot(rq);
 	if (ret != BLK_STS_OK) {
 		bio->bi_status = ret;
diff --git a/block/fops.c b/block/fops.c
index 41f8795874a9..ac52fe1a4b8d 100644
--- a/block/fops.c
+++ b/block/fops.c
@@ -973,6 +973,7 @@ const struct file_operations def_blk_fops = {
 	.fallocate	= blkdev_fallocate,
 	.uring_cmd	= blkdev_uring_cmd,
 	.fop_flags	= FOP_BUFFER_RASYNC,
+	.dma_map	= blkdev_dma_map,
 };
 
 static __init int blkdev_init(void)
diff --git a/include/linux/blk-mq-dma-token.h b/include/linux/blk-mq-dma-token.h
new file mode 100644
index 000000000000..4a8d84addc06
--- /dev/null
+++ b/include/linux/blk-mq-dma-token.h
@@ -0,0 +1,60 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef BLK_MQ_DMA_TOKEN_H
+#define BLK_MQ_DMA_TOKEN_H
+
+#include
+#include
+#include
+
+struct blk_mq_dma_token;
+struct blk_mq_dma_fence;
+
+struct blk_mq_dma_map {
+	void *private;
+
+	struct percpu_ref refs;
+	struct sg_table *sgt;
+	struct blk_mq_dma_token *token;
+	struct blk_mq_dma_fence *fence;
+	struct work_struct free_work;
+};
+
+struct blk_mq_dma_token {
+	struct dma_token base;
+	enum dma_data_direction dir;
+
+	void *private;
+
+	struct dma_buf *dmabuf;
+	struct blk_mq_dma_map __rcu *map;
+	struct request_queue *q;
+
+	struct mutex mapping_lock;
+	refcount_t refs;
+
+	atomic_t fence_seq;
+	u64 fence_ctx;
+};
+
+static inline
+struct blk_mq_dma_token *dma_token_to_blk_mq(struct dma_token *token)
+{
+	return container_of(token, struct blk_mq_dma_token, base);
+}
+
+blk_status_t blk_rq_assign_dma_map(struct request *req,
+				   struct blk_mq_dma_token *token);
+
+static inline void blk_rq_drop_dma_map(struct request *rq)
+{
+	if (rq->dma_map) {
+		percpu_ref_put(&rq->dma_map->refs);
+		rq->dma_map = NULL;
+	}
+}
+
+void blk_mq_dma_map_move_notify(struct blk_mq_dma_token *token);
+struct dma_token *blk_mq_dma_map(struct request_queue *q,
+				 struct dma_token_params *params);
+
+#endif /* BLK_MQ_DMA_TOKEN_H */
diff --git a/include/linux/blk-mq.h b/include/linux/blk-mq.h
index b54506b3b76d..4745d1e183f2 100644
--- a/include/linux/blk-mq.h
+++ b/include/linux/blk-mq.h
@@ -94,6 +94,9 @@ enum mq_rq_state {
 	MQ_RQ_COMPLETE		= 2,
 };
 
+struct blk_mq_dma_map;
+struct blk_mq_dma_token;
+
 /*
  * Try to put the fields that are referenced together in the same cacheline.
  *
@@ -170,6 +173,8 @@ struct request {
 
 	unsigned long deadline;
 
+	struct blk_mq_dma_map *dma_map;
+
 	/*
 	 * The hash is used inside the scheduler, and killed once the
 	 * request reaches the dispatch list. The ipi_list is only used
@@ -675,6 +680,21 @@ struct blk_mq_ops {
 	 */
 	void (*map_queues)(struct blk_mq_tag_set *set);
 
+	/**
+	 * @dma_map: Allows drivers to pre-map a dmabuf. The resulting driver
+	 * specific mapping will be wrapped into a dma_token and passed to the
+	 * read / write path in an iterator.
+	 */
+	int (*dma_map)(struct request_queue *q, struct blk_mq_dma_map *);
+	void (*dma_unmap)(struct request_queue *q, struct blk_mq_dma_map *);
+	int (*init_dma_token)(struct request_queue *q,
+			      struct blk_mq_dma_token *token);
+	void (*clean_dma_token)(struct request_queue *q,
+				struct blk_mq_dma_token *token);
+
+	struct dma_buf_attachment *(*dma_attach)(struct request_queue *q,
+						 struct dma_token_params *params);
+
 #ifdef CONFIG_BLK_DEBUG_FS
 	/**
 	 * @show_rq: Used by the debugfs implementation to show driver-specific
@@ -946,6 +966,7 @@ void blk_mq_tagset_busy_iter(struct blk_mq_tag_set *tagset,
 void blk_mq_tagset_wait_completed_request(struct blk_mq_tag_set *tagset);
 void blk_mq_freeze_queue_nomemsave(struct request_queue *q);
 void blk_mq_unfreeze_queue_nomemrestore(struct request_queue *q);
+
 static inline unsigned int __must_check
 blk_mq_freeze_queue(struct request_queue *q)
 {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cb4ba09959ee..dec75348f8dc 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1777,6 +1777,9 @@ struct block_device *file_bdev(struct file *bdev_file);
 bool disk_live(struct gendisk *disk);
 unsigned int block_size(struct block_device *bdev);
 
+struct dma_token *blkdev_dma_map(struct file *file,
+				 struct dma_token_params *params);
+
 #ifdef CONFIG_BLOCK
 void invalidate_bdev(struct block_device *bdev);
 int sync_blockdev(struct block_device *bdev);
-- 
2.52.0