From: Ming Lei <ming.lei@redhat.com>
To: Jens Axboe, linux-block@vger.kernel.org
Cc: Caleb Sander Mateos, Uday Shankar, Stefani Seibold, Andrew Morton,
	linux-kernel@vger.kernel.org, Ming Lei <ming.lei@redhat.com>
Subject: [PATCH V4 10/27] ublk: handle UBLK_U_IO_PREP_IO_CMDS
Date: Fri, 21 Nov 2025 09:58:32 +0800
Message-ID: <20251121015851.3672073-11-ming.lei@redhat.com>
In-Reply-To: <20251121015851.3672073-1-ming.lei@redhat.com>
References: <20251121015851.3672073-1-ming.lei@redhat.com>

This commit implements the handling of the UBLK_U_IO_PREP_IO_CMDS command,
which allows userspace to prepare a batch of I/O requests.

The core of this change is the `ublk_walk_cmd_buf` function, which iterates
over the elements in the uring_cmd buffer. For each element, it parses the
I/O details, finds the corresponding `ublk_io` structure, and prepares it
for future dispatch.

Add a per-io lock for protecting concurrent delivery and committing.

Signed-off-by: Ming Lei <ming.lei@redhat.com>
Reviewed-by: Caleb Sander Mateos
---
 drivers/block/ublk_drv.c      | 193 +++++++++++++++++++++++++++++++++-
 include/uapi/linux/ublk_cmd.h |   5 +
 2 files changed, 197 insertions(+), 1 deletion(-)
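
(Usage note, not part of the patch: a minimal, untested userspace sketch of
driving UBLK_U_IO_PREP_IO_CMDS, in case it helps review. It assumes liburing
>= 2.4 for io_uring_sqe_cmd(), a ring created with IORING_SETUP_SQE128, and
the ublk_cmd.h additions from this series; BATCH_QD, prep_io_cmds() and the
plain base-size element layout (flags == 0, so no UBLK_BATCH_F_HAS_BUF_ADDR
payload appended) are illustrative assumptions only.)

#include <errno.h>
#include <string.h>
#include <liburing.h>
#include <linux/ublk_cmd.h>

#define BATCH_QD	64	/* illustrative queue depth */

/* must stay readable until the uring_cmd completes */
static struct ublk_elem_header elems[BATCH_QD];

static int prep_io_cmds(struct io_uring *ring, int ublk_ch_fd, __u16 q_id)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	struct ublk_batch_io *uc;
	int i, ret;

	if (!sqe)
		return -ENOMEM;

	/* one element per tag; tag indexes the queue's ublk_io slots */
	for (i = 0; i < BATCH_QD; i++) {
		elems[i].tag = i;
		elems[i].buf_index = i;	/* only consumed with auto buf reg */
	}

	memset(sqe, 0, 2 * sizeof(*sqe));	/* one SQE128 slot */
	sqe->opcode = IORING_OP_URING_CMD;
	sqe->fd = ublk_ch_fd;			/* /dev/ublkcN char device */
	sqe->cmd_op = UBLK_U_IO_PREP_IO_CMDS;
	/* the kernel copy_from_user()s nr_elem * elem_bytes from here */
	sqe->addr = (unsigned long)elems;

	/* the batch header travels in the SQE128 command area */
	uc = io_uring_sqe_cmd(sqe);
	uc->q_id = q_id;
	uc->flags = 0;
	uc->nr_elem = BATCH_QD;
	uc->elem_bytes = sizeof(elems[0]);

	ret = io_uring_submit_and_wait(ring, 1);
	if (ret < 0)
		return ret;
	ret = io_uring_peek_cqe(ring, &cqe);
	if (ret)
		return ret;
	ret = cqe->res;		/* 0 on success, per the ublk_cmd.h comment */
	io_uring_cqe_seen(ring, cqe);
	return ret;
}

The split (header in the SQE128 command area, elements copied from
sqe->addr) matches ublk_handle_batch_prep_cmd() in the diff below; keeping
the element array alive until the CQE arrives is the conservative choice,
since the copy happens while the command is being handled.
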
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 21890947ceec..66c77daae955 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -117,6 +117,7 @@ struct ublk_batch_io_data {
 	struct ublk_device *ub;
 	struct io_uring_cmd *cmd;
 	struct ublk_batch_io header;
+	unsigned int issue_flags;
 };
 
 /*
@@ -201,6 +202,7 @@ struct ublk_io {
 	unsigned task_registered_buffers;
 
 	void *buf_ctx_handle;
+	spinlock_t lock;
 } ____cacheline_aligned_in_smp;
 
 struct ublk_queue {
@@ -270,6 +272,16 @@ static inline bool ublk_dev_support_batch_io(const struct ublk_device *ub)
 	return false;
 }
 
+static inline void ublk_io_lock(struct ublk_io *io)
+{
+	spin_lock(&io->lock);
+}
+
+static inline void ublk_io_unlock(struct ublk_io *io)
+{
+	spin_unlock(&io->lock);
+}
+
 static inline struct ublksrv_io_desc *
 ublk_get_iod(const struct ublk_queue *ubq, unsigned tag)
 {
@@ -2531,6 +2543,171 @@ static int ublk_ch_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
 	return ublk_ch_uring_cmd_local(cmd, issue_flags);
 }
 
+static inline __u64 ublk_batch_buf_addr(const struct ublk_batch_io *uc,
+					const struct ublk_elem_header *elem)
+{
+	const void *buf = elem;
+
+	if (uc->flags & UBLK_BATCH_F_HAS_BUF_ADDR)
+		return *(__u64 *)(buf + sizeof(*elem));
+	return 0;
+}
+
+static struct ublk_auto_buf_reg
+ublk_batch_auto_buf_reg(const struct ublk_batch_io *uc,
+			const struct ublk_elem_header *elem)
+{
+	struct ublk_auto_buf_reg reg = {
+		.index = elem->buf_index,
+		.flags = (uc->flags & UBLK_BATCH_F_AUTO_BUF_REG_FALLBACK) ?
+			UBLK_AUTO_BUF_REG_FALLBACK : 0,
+	};
+
+	return reg;
+}
+
+/*
+ * 48 can hold any type of buffer element (8, 16 and 24 bytes) because
+ * it is the least common multiple (LCM) of 8, 16 and 24
+ */
+#define UBLK_CMD_BATCH_TMP_BUF_SZ	(48 * 10)
+struct ublk_batch_io_iter {
+	void __user *uaddr;
+	unsigned done, total;
+	unsigned char elem_bytes;
+	/* copy to this buffer from user space */
+	unsigned char buf[UBLK_CMD_BATCH_TMP_BUF_SZ];
+};
+
+static inline int
+__ublk_walk_cmd_buf(struct ublk_queue *ubq,
+		    struct ublk_batch_io_iter *iter,
+		    const struct ublk_batch_io_data *data,
+		    unsigned bytes,
+		    int (*cb)(struct ublk_queue *q,
+			      const struct ublk_batch_io_data *data,
+			      const struct ublk_elem_header *elem))
+{
+	unsigned int i;
+	int ret = 0;
+
+	for (i = 0; i < bytes; i += iter->elem_bytes) {
+		const struct ublk_elem_header *elem =
+			(const struct ublk_elem_header *)&iter->buf[i];
+
+		if (unlikely(elem->tag >= data->ub->dev_info.queue_depth)) {
+			ret = -EINVAL;
+			break;
+		}
+
+		ret = cb(ubq, data, elem);
+		if (unlikely(ret))
+			break;
+	}
+
+	iter->done += i;
+	return ret;
+}
+
+static int ublk_walk_cmd_buf(struct ublk_batch_io_iter *iter,
+			     const struct ublk_batch_io_data *data,
+			     int (*cb)(struct ublk_queue *q,
+				       const struct ublk_batch_io_data *data,
+				       const struct ublk_elem_header *elem))
+{
+	struct ublk_queue *ubq = ublk_get_queue(data->ub, data->header.q_id);
+	int ret = 0;
+
+	while (iter->done < iter->total) {
+		unsigned int len = min(sizeof(iter->buf), iter->total - iter->done);
+
+		if (copy_from_user(iter->buf, iter->uaddr + iter->done, len)) {
+			pr_warn("ublk%d: read batch cmd buffer failed\n",
+				data->ub->dev_info.dev_id);
+			return -EFAULT;
+		}
+
+		ret = __ublk_walk_cmd_buf(ubq, iter, data, len, cb);
+		if (ret)
+			return ret;
+	}
+	return 0;
+}
+
+static int ublk_batch_unprep_io(struct ublk_queue *ubq,
+				const struct ublk_batch_io_data *data,
+				const struct ublk_elem_header *elem)
+{
+	struct ublk_io *io = &ubq->ios[elem->tag];
+
+	data->ub->nr_io_ready--;
+	ublk_io_lock(io);
+	io->flags = 0;
+	ublk_io_unlock(io);
+	return 0;
+}
+
+static void ublk_batch_revert_prep_cmd(struct ublk_batch_io_iter *iter,
+				       const struct ublk_batch_io_data *data)
+{
+	int ret;
+
+	/* Re-process only what we've already processed, starting from the beginning */
+	iter->total = iter->done;
+	iter->done = 0;
+
+	ret = ublk_walk_cmd_buf(iter, data, ublk_batch_unprep_io);
+	WARN_ON_ONCE(ret);
+}
+
+static int ublk_batch_prep_io(struct ublk_queue *ubq,
+			      const struct ublk_batch_io_data *data,
+			      const struct ublk_elem_header *elem)
+{
+	struct ublk_io *io = &ubq->ios[elem->tag];
+	const struct ublk_batch_io *uc = &data->header;
+	union ublk_io_buf buf = { 0 };
+	int ret;
+
+	if (ublk_dev_support_auto_buf_reg(data->ub))
+		buf.auto_reg = ublk_batch_auto_buf_reg(uc, elem);
+	else if (ublk_dev_need_map_io(data->ub)) {
+		buf.addr = ublk_batch_buf_addr(uc, elem);
+
+		ret = ublk_check_fetch_buf(data->ub, buf.addr);
+		if (ret)
+			return ret;
+	}
+
+	ublk_io_lock(io);
+	ret = __ublk_fetch(data->cmd, data->ub, io);
+	if (!ret)
+		io->buf = buf;
+	ublk_io_unlock(io);
+
+	return ret;
+}
+
+static int ublk_handle_batch_prep_cmd(const struct ublk_batch_io_data *data)
+{
+	const struct ublk_batch_io *uc = &data->header;
+	struct io_uring_cmd *cmd = data->cmd;
+	struct ublk_batch_io_iter iter = {
+		.uaddr = u64_to_user_ptr(READ_ONCE(cmd->sqe->addr)),
+		.total = uc->nr_elem * uc->elem_bytes,
+		.elem_bytes = uc->elem_bytes,
+	};
+	int ret;
+
+	mutex_lock(&data->ub->mutex);
+	ret = ublk_walk_cmd_buf(&iter, data, ublk_batch_prep_io);
+
+	if (ret && iter.done)
+		ublk_batch_revert_prep_cmd(&iter, data);
+	mutex_unlock(&data->ub->mutex);
+	return ret;
+}
+
 static int ublk_check_batch_cmd_flags(const struct ublk_batch_io *uc)
 {
 	unsigned elem_bytes = sizeof(struct ublk_elem_header);
@@ -2587,6 +2764,7 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
 			.nr_elem = READ_ONCE(uc->nr_elem),
 			.elem_bytes = READ_ONCE(uc->elem_bytes),
 		},
+		.issue_flags = issue_flags,
 	};
 	u32 cmd_op = cmd->cmd_op;
 	int ret = -EINVAL;
@@ -2596,6 +2774,11 @@ static int ublk_ch_batch_io_uring_cmd(struct io_uring_cmd *cmd,
 
 	switch (cmd_op) {
 	case UBLK_U_IO_PREP_IO_CMDS:
+		ret = ublk_check_batch_cmd(&data);
+		if (ret)
+			goto out;
+		ret = ublk_handle_batch_prep_cmd(&data);
+		break;
 	case UBLK_U_IO_COMMIT_IO_CMDS:
 		ret = ublk_check_batch_cmd(&data);
 		if (ret)
@@ -2770,7 +2953,7 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
 	struct ublk_queue *ubq;
 	struct page *page;
 	int numa_node;
-	int size;
+	int size, i;
 
 	/* Determine NUMA node based on queue's CPU affinity */
 	numa_node = ublk_get_queue_numa_node(ub, q_id);
@@ -2795,6 +2978,9 @@ static int ublk_init_queue(struct ublk_device *ub, int q_id)
 	}
 	ubq->io_cmd_buf = page_address(page);
 
+	for (i = 0; i < ubq->q_depth; i++)
+		spin_lock_init(&ubq->ios[i].lock);
+
 	ub->queues[q_id] = ubq;
 	ubq->dev = ub;
 	return 0;
@@ -3021,6 +3207,11 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub,
 		return -EINVAL;
 
 	mutex_lock(&ub->mutex);
+	/* device may become not ready in case of F_BATCH */
+	if (!ublk_dev_ready(ub)) {
+		ret = -EINVAL;
+		goto out_unlock;
+	}
 	if (ub->dev_info.state == UBLK_S_DEV_LIVE ||
 	    test_bit(UB_STATE_USED, &ub->state)) {
 		ret = -EEXIST;
diff --git a/include/uapi/linux/ublk_cmd.h b/include/uapi/linux/ublk_cmd.h
index 2ce5a496b622..c96c299057c3 100644
--- a/include/uapi/linux/ublk_cmd.h
+++ b/include/uapi/linux/ublk_cmd.h
@@ -102,6 +102,11 @@
 	_IOWR('u', 0x23, struct ublksrv_io_cmd)
 #define UBLK_U_IO_UNREGISTER_IO_BUF \
 	_IOWR('u', 0x24, struct ublksrv_io_cmd)
+
+/*
+ * return 0 if the command runs successfully, otherwise a failure code
+ * is returned
+ */
 #define UBLK_U_IO_PREP_IO_CMDS \
 	_IOWR('u', 0x25, struct ublk_batch_io)
 #define UBLK_U_IO_COMMIT_IO_CMDS \
-- 
2.47.0