From: Pavel Begunkov
To: linux-block@vger.kernel.org, io-uring@vger.kernel.org
Cc: Vishal Verma, tushar.gohad@intel.com, Keith Busch, Jens Axboe,
	Christoph Hellwig, Sagi Grimberg, Alexander Viro, Christian Brauner,
	Andrew Morton, Sumit Semwal, Christian König, Pavel Begunkov,
	linux-kernel@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-fsdevel@vger.kernel.org, linux-media@vger.kernel.org,
	dri-devel@lists.freedesktop.org, linaro-mm-sig@lists.linaro.org
Subject: [RFC v2 06/11] nvme-pci: add support for dmabuf registration
Date: Sun, 23 Nov 2025 22:51:26 +0000
Message-ID: <9bc25f46d2116436d73140cd8e8554576de2caca.1763725388.git.asml.silence@gmail.com>
X-Mailer: git-send-email 2.52.0

Implement the dma-token related callbacks for nvme block devices.
Registering a token attaches the dma-buf to the NVMe PCI device (with
peer-to-peer access allowed) and requires the buffer size to be a
multiple of NVME_CTRL_PAGE_SIZE. Mapping builds the attachment's sg
table and flattens it into an array of controller-page-sized DMA
addresses kept in the map's private data.
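
For illustration only, the call sequence expected from the block layer
side looks roughly like the sketch below. Only the callbacks and the
token/map fields used here are introduced by this series; the example
caller, its error handling and the DMA direction choice are assumptions
made for the sketch, not part of the patch.

	static int example_dma_token_io(struct request_queue *q,
					struct dma_buf *dmabuf,
					struct blk_mq_dma_token *token,
					struct blk_mq_dma_map *map)
	{
		int ret;

		token->dmabuf = dmabuf;
		token->dir = DMA_BIDIRECTIONAL;	/* assumed direction */

		/* attach the dma-buf to the nvme device, once per token */
		ret = q->mq_ops->init_dma_token(q, token);
		if (ret)
			return ret;

		/* build the sg table and the per-page DMA address list */
		map->token = token;
		ret = q->mq_ops->dma_map(q, map);
		if (ret)
			goto out_clean;

		/* ... issue I/O against the premapped addresses ... */

		q->mq_ops->dma_unmap(q, map);
	out_clean:
		q->mq_ops->clean_dma_token(q, token);
		return ret;
	}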
Signed-off-by: Pavel Begunkov
---
 drivers/nvme/host/pci.c | 95 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/drivers/nvme/host/pci.c b/drivers/nvme/host/pci.c
index e5ca8301bb8b..63e03c3dc044 100644
--- a/drivers/nvme/host/pci.c
+++ b/drivers/nvme/host/pci.c
@@ -27,6 +27,7 @@
 #include
 #include
 #include
+#include <linux/dma-buf.h>
 
 #include "trace.h"
 #include "nvme.h"
@@ -482,6 +483,92 @@ static void nvme_release_descriptor_pools(struct nvme_dev *dev)
 	}
 }
 
+static void nvme_dmabuf_move_notify(struct dma_buf_attachment *attach)
+{
+	blk_mq_dma_map_move_notify(attach->importer_priv);
+}
+
+const struct dma_buf_attach_ops nvme_dmabuf_importer_ops = {
+	.move_notify = nvme_dmabuf_move_notify,
+	.allow_peer2peer = true,
+};
+
+static int nvme_init_dma_token(struct request_queue *q,
+			       struct blk_mq_dma_token *token)
+{
+	struct dma_buf_attachment *attach;
+	struct nvme_ns *ns = q->queuedata;
+	struct nvme_dev *dev = to_nvme_dev(ns->ctrl);
+	struct dma_buf *dmabuf = token->dmabuf;
+
+	if (dmabuf->size % NVME_CTRL_PAGE_SIZE)
+		return -EINVAL;
+
+	attach = dma_buf_dynamic_attach(dmabuf, dev->dev,
+					&nvme_dmabuf_importer_ops, token);
+	if (IS_ERR(attach))
+		return PTR_ERR(attach);
+
+	token->private = attach;
+	return 0;
+}
+
+static void nvme_clean_dma_token(struct request_queue *q,
+				 struct blk_mq_dma_token *token)
+{
+	struct dma_buf_attachment *attach = token->private;
+
+	dma_buf_detach(token->dmabuf, attach);
+}
+
+static int nvme_dma_map(struct request_queue *q, struct blk_mq_dma_map *map)
+{
+	struct blk_mq_dma_token *token = map->token;
+	struct dma_buf_attachment *attach = token->private;
+	unsigned nr_entries;
+	unsigned long tmp, i = 0;
+	struct scatterlist *sg;
+	struct sg_table *sgt;
+	dma_addr_t *dma_list;
+
+	nr_entries = token->dmabuf->size / NVME_CTRL_PAGE_SIZE;
+	dma_list = kmalloc_array(nr_entries, sizeof(dma_list[0]), GFP_KERNEL);
+	if (!dma_list)
+		return -ENOMEM;
+
+	sgt = dma_buf_map_attachment(attach, token->dir);
+	if (IS_ERR(sgt)) {
+		kfree(dma_list);
+		return PTR_ERR(sgt);
+	}
+	map->sgt = sgt;
+
+	for_each_sgtable_dma_sg(sgt, sg, tmp) {
+		dma_addr_t dma = sg_dma_address(sg);
+		unsigned long sg_len = sg_dma_len(sg);
+
+		while (sg_len) {
+			dma_list[i++] = dma;
+			dma += NVME_CTRL_PAGE_SIZE;
+			sg_len -= NVME_CTRL_PAGE_SIZE;
+		}
+	}
+
+	map->private = dma_list;
+	return 0;
+}
+
+static void nvme_dma_unmap(struct request_queue *q, struct blk_mq_dma_map *map)
+{
+	struct blk_mq_dma_token *token = map->token;
+	struct dma_buf_attachment *attach = token->private;
+	dma_addr_t *dma_list = map->private;
+
+	dma_buf_unmap_attachment_unlocked(attach, map->sgt, token->dir);
+	map->sgt = NULL;
+	kfree(dma_list);
+}
+
 static int nvme_init_hctx_common(struct blk_mq_hw_ctx *hctx, void *data,
 		unsigned qid)
 {
@@ -1067,6 +1154,9 @@ static blk_status_t nvme_map_data(struct request *req)
 	struct blk_dma_iter iter;
 	blk_status_t ret;
 
+	if (req->bio && bio_flagged(req->bio, BIO_DMA_TOKEN))
+		return BLK_STS_RESOURCE;
+
 	/*
 	 * Try to skip the DMA iterator for single segment requests, as that
 	 * significantly improves performances for small I/O sizes.
@@ -2093,6 +2183,11 @@ static const struct blk_mq_ops nvme_mq_ops = {
 	.map_queues	= nvme_pci_map_queues,
 	.timeout	= nvme_timeout,
 	.poll		= nvme_poll,
+
+	.dma_map	= nvme_dma_map,
+	.dma_unmap	= nvme_dma_unmap,
+	.init_dma_token	= nvme_init_dma_token,
+	.clean_dma_token = nvme_clean_dma_token,
 };
 
 static void nvme_dev_remove_admin(struct nvme_dev *dev)
-- 
2.52.0