From nobody Tue Feb 10 06:29:01 2026
From: Uday Shankar
Date: Tue, 15 Apr 2025 18:59:37 -0600
Subject: [PATCH v4 1/4] ublk: require unique task per io instead of unique task per hctx
Message-Id: <20250415-ublk_task_per_io-v4-1-54210b91a46f@purestorage.com>
References: <20250415-ublk_task_per_io-v4-0-54210b91a46f@purestorage.com>
In-Reply-To: <20250415-ublk_task_per_io-v4-0-54210b91a46f@purestorage.com>
To: Ming Lei, Jens Axboe, Caleb Sander Mateos
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Uday Shankar
X-Mailer: b4 0.14.2

Currently, ublk_drv associates to each hardware queue (hctx) a unique
task (called the queue's ubq_daemon) which is allowed to issue
COMMIT_AND_FETCH commands against the hctx. If any other task attempts
to do so, the command fails immediately with EINVAL. When considered
together with the block layer architecture, the result is that for each
CPU C on the system, there is a unique ublk server thread which is
allowed to handle I/O submitted on CPU C.

This can lead to suboptimal performance under imbalanced load
generation. For an extreme example, suppose all the load is generated
on CPUs mapping to a single ublk server thread. Then that thread may be
fully utilized and become the bottleneck in the system, while other
ublk server threads are totally idle.

This issue can also be addressed directly in the ublk server without
kernel support by having threads dequeue I/Os and pass them around to
ensure even load. But this solution requires inter-thread communication
at least twice for each I/O (submission and completion), which is
generally a bad pattern for performance. The problem gets even worse
with zero copy, as more inter-thread communication would be required to
have the buffer register/unregister calls come from the correct thread.

Therefore, address this issue in ublk_drv by requiring a unique task
per I/O instead of per queue/hctx. Imbalanced load can then be balanced
across all ublk server threads by having threads issue FETCH_REQs in a
round-robin manner.
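For illustration only (this is a userspace concern and not part of the
kernel change): a minimal sketch of one such round-robin assignment.
ublk_queue_fetch() is a hypothetical helper that prepares and submits
the FETCH_REQ SQE for a single io; the assignment matches the toy
example that follows.

        /*
         * Illustration only: io (qid, tag) is fetched -- and later
         * committed -- by ublk server thread (qid + tag) % nr_threads.
         */

        /* Hypothetical helper: prep and submit the FETCH_REQ SQE for (qid, tag). */
        extern void ublk_queue_fetch(unsigned int qid, unsigned int tag);

        static unsigned int thread_for_io(unsigned int qid, unsigned int tag,
                                          unsigned int nr_threads)
        {
                return (qid + tag) % nr_threads;
        }

        /* Run once in each poller thread; it fetches only its own ios. */
        static void issue_fetch_reqs(unsigned int thread_idx,
                                     unsigned int nr_threads,
                                     unsigned int nr_queues,
                                     unsigned int q_depth)
        {
                for (unsigned int q = 0; q < nr_queues; q++)
                        for (unsigned int t = 0; t < q_depth; t++)
                                if (thread_for_io(q, t, nr_threads) == thread_idx)
                                        ublk_queue_fetch(q, t);
        }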
As a small toy example, consider a system with a single ublk device
having 2 queues, each of queue depth 4. A ublk server having 4 threads
could issue its FETCH_REQs against this device as follows (where each
entry is the qid,tag pair that the FETCH_REQ targets):

poller thread:  T0      T1      T2      T3
                0,0     0,1     0,2     0,3
                1,3     1,0     1,1     1,2

Since tags appear to be allocated in sequential chunks, this setup
provides a rough approximation to distributing I/Os round-robin across
all ublk server threads, while letting I/Os stay fully thread-local.

Signed-off-by: Uday Shankar
Reviewed-by: Caleb Sander Mateos
---
 drivers/block/ublk_drv.c | 75 ++++++++++++++++++++++---------------------------
 1 file changed, 34 insertions(+), 41 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index cdb1543fa4a9817aa2ca2fca66720f589cf222be..9a0d2547512fc8119460739230599d48d2c2a306 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -150,6 +150,7 @@ struct ublk_io {
         int res;
 
         struct io_uring_cmd *cmd;
+        struct task_struct *task;
 };
 
 struct ublk_queue {
@@ -157,11 +158,9 @@ struct ublk_queue {
         int q_depth;
 
         unsigned long flags;
-        struct task_struct *ubq_daemon;
         struct ublksrv_io_desc *io_cmd_buf;
 
         bool force_abort;
-        bool timeout;
         bool canceling;
         bool fail_io; /* copy of dev->state == UBLK_S_DEV_FAIL_IO */
         unsigned short nr_io_ready;     /* how many ios setup */
@@ -1072,11 +1071,6 @@ static inline struct ublk_uring_cmd_pdu *ublk_get_uring_cmd_pdu(
         return io_uring_cmd_to_pdu(ioucmd, struct ublk_uring_cmd_pdu);
 }
 
-static inline bool ubq_daemon_is_dying(struct ublk_queue *ubq)
-{
-        return ubq->ubq_daemon->flags & PF_EXITING;
-}
-
 /* todo: handle partial completion */
 static inline void __ublk_complete_rq(struct request *req)
 {
@@ -1224,13 +1218,13 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
         /*
          * Task is exiting if either:
          *
-         * (1) current != ubq_daemon.
+         * (1) current != io->task.
          *     io_uring_cmd_complete_in_task() tries to run task_work
-         *     in a workqueue if ubq_daemon(cmd's task) is PF_EXITING.
+         *     in a workqueue if cmd's task is PF_EXITING.
          *
         * (2) current->flags & PF_EXITING.
         */
-        if (unlikely(current != ubq->ubq_daemon || current->flags & PF_EXITING)) {
+        if (unlikely(current != io->task || current->flags & PF_EXITING)) {
                 __ublk_abort_rq(ubq, req);
                 return;
         }
@@ -1336,23 +1330,20 @@ static void ublk_queue_cmd_list(struct ublk_queue *ubq, struct rq_list *l)
 static enum blk_eh_timer_return ublk_timeout(struct request *rq)
 {
         struct ublk_queue *ubq = rq->mq_hctx->driver_data;
+        struct ublk_io *io = &ubq->ios[rq->tag];
         unsigned int nr_inflight = 0;
         int i;
 
         if (ubq->flags & UBLK_F_UNPRIVILEGED_DEV) {
-                if (!ubq->timeout) {
-                        send_sig(SIGKILL, ubq->ubq_daemon, 0);
-                        ubq->timeout = true;
-                }
-
+                send_sig(SIGKILL, io->task, 0);
                 return BLK_EH_DONE;
         }
 
-        if (!ubq_daemon_is_dying(ubq))
+        if (!(io->task->flags & PF_EXITING))
                 return BLK_EH_RESET_TIMER;
 
         for (i = 0; i < ubq->q_depth; i++) {
-                struct ublk_io *io = &ubq->ios[i];
+                io = &ubq->ios[i];
 
                 if (!(io->flags & UBLK_IO_FLAG_ACTIVE))
                         nr_inflight++;
@@ -1552,8 +1543,8 @@ static void ublk_commit_completion(struct ublk_device *ub,
 }
 
 /*
- * Called from ubq_daemon context via cancel fn, meantime quiesce ublk
- * blk-mq queue, so we are called exclusively with blk-mq and ubq_daemon
+ * Called from io task context via cancel fn, meantime quiesce ublk
+ * blk-mq queue, so we are called exclusively with blk-mq and io task
  * context, so everything is serialized.
  */
 static void ublk_abort_queue(struct ublk_device *ub, struct ublk_queue *ubq)
@@ -1669,13 +1660,13 @@ static void ublk_uring_cmd_cancel_fn(struct io_uring_cmd *cmd,
                 return;
 
         task = io_uring_cmd_get_task(cmd);
-        if (WARN_ON_ONCE(task && task != ubq->ubq_daemon))
+        io = &ubq->ios[pdu->tag];
+        if (WARN_ON_ONCE(task && task != io->task))
                 return;
 
         ub = ubq->dev;
         need_schedule = ublk_abort_requests(ub, ubq);
 
-        io = &ubq->ios[pdu->tag];
         WARN_ON_ONCE(io->cmd != cmd);
         ublk_cancel_cmd(ubq, io, issue_flags);
 
@@ -1836,8 +1827,6 @@ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
         mutex_lock(&ub->mutex);
         ubq->nr_io_ready++;
         if (ublk_queue_ready(ubq)) {
-                ubq->ubq_daemon = current;
-                get_task_struct(ubq->ubq_daemon);
                 ub->nr_queues_ready++;
 
                 if (capable(CAP_SYS_ADMIN))
@@ -1952,14 +1941,14 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
         if (!ubq || ub_cmd->q_id != ubq->q_id)
                 goto out;
 
-        if (ubq->ubq_daemon && ubq->ubq_daemon != current)
-                goto out;
-
         if (tag >= ubq->q_depth)
                 goto out;
 
         io = &ubq->ios[tag];
 
+        if (io->task && io->task != current)
+                goto out;
+
         /* there is pending io cmd, something must be wrong */
         if (io->flags & UBLK_IO_FLAG_ACTIVE) {
                 ret = -EBUSY;
@@ -2012,6 +2001,7 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 
                 ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
                 ublk_mark_io_ready(ub, ubq);
+                io->task = get_task_struct(current);
                 break;
         case UBLK_IO_COMMIT_AND_FETCH_REQ:
                 req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag);
@@ -2248,9 +2238,15 @@ static void ublk_deinit_queue(struct ublk_device *ub, int q_id)
 {
         int size = ublk_queue_cmd_buf_size(ub, q_id);
         struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
+        struct ublk_io *io;
+        int i;
+
+        for (i = 0; i < ubq->q_depth; i++) {
+                io = &ubq->ios[i];
+                if (io->task)
+                        put_task_struct(io->task);
+        }
 
-        if (ubq->ubq_daemon)
-                put_task_struct(ubq->ubq_daemon);
         if (ubq->io_cmd_buf)
                 free_pages((unsigned long)ubq->io_cmd_buf, get_order(size));
 }
@@ -2936,15 +2932,8 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
 {
         int i;
 
-        WARN_ON_ONCE(!(ubq->ubq_daemon && ubq_daemon_is_dying(ubq)));
-
         /* All old ioucmds have to be completed */
         ubq->nr_io_ready = 0;
-        /* old daemon is PF_EXITING, put it now */
-        put_task_struct(ubq->ubq_daemon);
-        /* We have to reset it to NULL, otherwise ub won't accept new FETCH_REQ */
-        ubq->ubq_daemon = NULL;
-        ubq->timeout = false;
         ubq->canceling = false;
 
         for (i = 0; i < ubq->q_depth; i++) {
@@ -2954,6 +2943,10 @@ static void ublk_queue_reinit(struct ublk_device *ub, struct ublk_queue *ubq)
                 io->flags = 0;
                 io->cmd = NULL;
                 io->addr = 0;
+
+                WARN_ON_ONCE(!(io->task && (io->task->flags & PF_EXITING)));
+                put_task_struct(io->task);
+                io->task = NULL;
         }
 }
 
@@ -2993,7 +2986,7 @@ static int ublk_ctrl_start_recovery(struct ublk_device *ub,
         pr_devel("%s: start recovery for dev id %d.\n", __func__, header->dev_id);
         for (i = 0; i < ub->dev_info.nr_hw_queues; i++)
                 ublk_queue_reinit(ub, ublk_get_queue(ub, i));
-        /* set to NULL, otherwise new ubq_daemon cannot mmap the io_cmd_buf */
+        /* set to NULL, otherwise new tasks cannot mmap the io_cmd_buf */
         ub->mm = NULL;
         ub->nr_queues_ready = 0;
         ub->nr_privileged_daemon = 0;
@@ -3011,14 +3004,14 @@ static int ublk_ctrl_end_recovery(struct ublk_device *ub,
         int ret = -EINVAL;
         int i;
 
-        pr_devel("%s: Waiting for new ubq_daemons(nr: %d) are ready, dev id %d...\n",
-                        __func__, ub->dev_info.nr_hw_queues, header->dev_id);
-        /* wait until new ubq_daemon sending all FETCH_REQ */
+        pr_devel("%s: Waiting for all FETCH_REQs, dev id %d...\n", __func__,
+                 header->dev_id);
+
         if (wait_for_completion_interruptible(&ub->completion))
                 return -EINTR;
 
-        pr_devel("%s: All new ubq_daemons(nr: %d) are ready, dev id %d\n",
-                        __func__, ub->dev_info.nr_hw_queues, header->dev_id);
+        pr_devel("%s: All FETCH_REQs received, dev id %d\n", __func__,
+                 header->dev_id);
 
         mutex_lock(&ub->mutex);
         if (ublk_nosrv_should_stop_dev(ub))
-- 
2.34.1


From nobody Tue Feb 10 06:29:01 2026
From: Uday Shankar
Date: Tue, 15 Apr 2025 18:59:38 -0600
Subject: [PATCH v4 2/4] ublk: mark ublk_queue as const for ublk_commit_and_fetch
Message-Id: <20250415-ublk_task_per_io-v4-2-54210b91a46f@purestorage.com>
References: <20250415-ublk_task_per_io-v4-0-54210b91a46f@purestorage.com>
In-Reply-To: <20250415-ublk_task_per_io-v4-0-54210b91a46f@purestorage.com>
To: Ming Lei, Jens Axboe, Caleb Sander Mateos
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Uday Shankar
X-Mailer: b4 0.14.2

We now allow multiple tasks to operate on I/Os belonging to the same
queue concurrently. This means that any writes to ublk_queue in the I/O
path are potential sources of data races. Try to prevent these by
marking ublk_queue pointers as const when handling COMMIT_AND_FETCH.
Move the logic for this command into its own function,
ublk_commit_and_fetch. Also open code ublk_commit_completion in
ublk_commit_and_fetch to reduce the number of parameters and avoid a
redundant lookup.
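As an aside (illustration only, not code from this series): the
protection the const qualifier buys is purely compile-time, but it turns
an accidental store to shared queue state from the I/O path into a build
failure rather than a potential runtime data race. A minimal standalone
sketch with a made-up struct:

        struct example_queue {
                int canceling;
        };

        static void io_path_handler(const struct example_queue *q)
        {
                if (q->canceling)       /* reads of shared state are fine */
                        return;
                /* q->canceling = 1;    <-- would now fail to compile */
        }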
Suggested-by: Ming Lei
Signed-off-by: Uday Shankar
Reviewed-by: Caleb Sander Mateos
---
 drivers/block/ublk_drv.c | 91 +++++++++++++++++++++++-------------------------
 1 file changed, 43 insertions(+), 48 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 9a0d2547512fc8119460739230599d48d2c2a306..153f67d92248ad45bddd2437b1306bb23df7d1ae 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1518,30 +1518,6 @@ static int ublk_ch_mmap(struct file *filp, struct vm_area_struct *vma)
         return remap_pfn_range(vma, vma->vm_start, pfn, sz, vma->vm_page_prot);
 }
 
-static void ublk_commit_completion(struct ublk_device *ub,
-                const struct ublksrv_io_cmd *ub_cmd)
-{
-        u32 qid = ub_cmd->q_id, tag = ub_cmd->tag;
-        struct ublk_queue *ubq = ublk_get_queue(ub, qid);
-        struct ublk_io *io = &ubq->ios[tag];
-        struct request *req;
-
-        /* now this cmd slot is owned by nbd driver */
-        io->flags &= ~UBLK_IO_FLAG_OWNED_BY_SRV;
-        io->res = ub_cmd->result;
-
-        /* find the io request and complete */
-        req = blk_mq_tag_to_rq(ub->tag_set.tags[qid], tag);
-        if (WARN_ON_ONCE(unlikely(!req)))
-                return;
-
-        if (req_op(req) == REQ_OP_ZONE_APPEND)
-                req->__sector = ub_cmd->zone_append_lba;
-
-        if (likely(!blk_should_fake_timeout(req->q)))
-                ublk_put_req_ref(ubq, req);
-}
-
 /*
  * Called from io task context via cancel fn, meantime quiesce ublk
  * blk-mq queue, so we are called exclusively with blk-mq and io task
@@ -1918,6 +1894,45 @@ static int ublk_unregister_io_buf(struct io_uring_cmd *cmd,
         return io_buffer_unregister_bvec(cmd, index, issue_flags);
 }
 
+static int ublk_commit_and_fetch(const struct ublk_queue *ubq,
+                                 struct ublk_io *io, struct io_uring_cmd *cmd,
+                                 const struct ublksrv_io_cmd *ub_cmd,
+                                 struct request *req)
+{
+        if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
+                return -EINVAL;
+
+        if (ublk_need_map_io(ubq)) {
+                /*
+                 * COMMIT_AND_FETCH_REQ has to provide IO buffer if
+                 * NEED GET DATA is not enabled or it is Read IO.
+                 */
+                if (!ub_cmd->addr && (!ublk_need_get_data(ubq) ||
+                                        req_op(req) == REQ_OP_READ))
+                        return -EINVAL;
+        } else if (req_op(req) != REQ_OP_ZONE_APPEND && ub_cmd->addr) {
+                /*
+                 * User copy requires addr to be unset when command is
+                 * not zone append
+                 */
+                return -EINVAL;
+        }
+
+        ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
+
+        /* now this cmd slot is owned by ublk driver */
+        io->flags &= ~UBLK_IO_FLAG_OWNED_BY_SRV;
+        io->res = ub_cmd->result;
+
+        if (req_op(req) == REQ_OP_ZONE_APPEND)
+                req->__sector = ub_cmd->zone_append_lba;
+
+        if (likely(!blk_should_fake_timeout(req->q)))
+                ublk_put_req_ref(ubq, req);
+
+        return -EIOCBQUEUED;
+}
+
 static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
                                unsigned int issue_flags,
                                const struct ublksrv_io_cmd *ub_cmd)
@@ -1928,7 +1943,6 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
         u32 cmd_op = cmd->cmd_op;
         unsigned tag = ub_cmd->tag;
         int ret = -EINVAL;
-        struct request *req;
 
         pr_devel("%s: received: cmd op %d queue %d tag %d result %d\n",
                         __func__, cmd->cmd_op, ub_cmd->q_id, tag,
@@ -2004,30 +2018,11 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
                 io->task = get_task_struct(current);
                 break;
         case UBLK_IO_COMMIT_AND_FETCH_REQ:
-                req = blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag);
-
-                if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
+                ret = ublk_commit_and_fetch(
+                        ubq, io, cmd, ub_cmd,
+                        blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag));
+                if (ret != -EIOCBQUEUED)
                         goto out;
-
-                if (ublk_need_map_io(ubq)) {
-                        /*
-                         * COMMIT_AND_FETCH_REQ has to provide IO buffer if
-                         * NEED GET DATA is not enabled or it is Read IO.
-                         */
-                        if (!ub_cmd->addr && (!ublk_need_get_data(ubq) ||
-                                                req_op(req) == REQ_OP_READ))
-                                goto out;
-                } else if (req_op(req) != REQ_OP_ZONE_APPEND && ub_cmd->addr) {
-                        /*
-                         * User copy requires addr to be unset when command is
-                         * not zone append
-                         */
-                        ret = -EINVAL;
-                        goto out;
-                }
-
-                ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
-                ublk_commit_completion(ub, ub_cmd);
                 break;
         case UBLK_IO_NEED_GET_DATA:
                 if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
-- 
2.34.1


From nobody Tue Feb 10 06:29:01 2026
From: Uday Shankar
Date: Tue, 15 Apr 2025 18:59:39 -0600
Subject: [PATCH v4 3/4] ublk: mark ublk_queue as const for ublk_register_io_buf
Message-Id: <20250415-ublk_task_per_io-v4-3-54210b91a46f@purestorage.com>
References: <20250415-ublk_task_per_io-v4-0-54210b91a46f@purestorage.com>
In-Reply-To: <20250415-ublk_task_per_io-v4-0-54210b91a46f@purestorage.com>
To: Ming Lei, Jens Axboe, Caleb Sander Mateos
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Uday Shankar
X-Mailer: b4 0.14.2

We now allow multiple tasks to operate on I/Os belonging to the same
queue concurrently. This means that any writes to ublk_queue in the I/O
path are potential sources of data races. Try to prevent these by
marking ublk_queue pointers as const in ublk_register_io_buf.

Suggested-by: Ming Lei
Signed-off-by: Uday Shankar
Reviewed-by: Caleb Sander Mateos
---
 drivers/block/ublk_drv.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 153f67d92248ad45bddd2437b1306bb23df7d1ae..e2cb54895481aebaa91ab23ba05cf26a950a642f 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -211,7 +211,7 @@ struct ublk_params_header {
 static bool ublk_abort_requests(struct ublk_device *ub, struct ublk_queue *ubq);
 
 static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
-                struct ublk_queue *ubq, int tag, size_t offset);
+                const struct ublk_queue *ubq, int tag, size_t offset);
 static inline unsigned int ublk_req_build_flags(struct request *req);
 static inline struct ublksrv_io_desc *ublk_get_iod(struct ublk_queue *ubq,
                                                    int tag);
@@ -1867,7 +1867,7 @@ static void ublk_io_release(void *priv)
 }
 
 static int ublk_register_io_buf(struct io_uring_cmd *cmd,
-                                struct ublk_queue *ubq, unsigned int tag,
+                                const struct ublk_queue *ubq, unsigned int tag,
                                 unsigned int index, unsigned int issue_flags)
 {
         struct ublk_device *ub = cmd->file->private_data;
@@ -2043,7 +2043,7 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
 }
 
 static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
-                struct ublk_queue *ubq, int tag, size_t offset)
+                const struct ublk_queue *ubq, int tag, size_t offset)
 {
         struct request *req;
 
-- 
2.34.1


From nobody Tue Feb 10 06:29:01 2026
From: Uday Shankar
Date: Tue, 15 Apr 2025 18:59:40 -0600
Subject: [PATCH v4 4/4] ublk: mark ublk_queue as const for ublk_handle_need_get_data
Message-Id: <20250415-ublk_task_per_io-v4-4-54210b91a46f@purestorage.com>
References: <20250415-ublk_task_per_io-v4-0-54210b91a46f@purestorage.com>
In-Reply-To: <20250415-ublk_task_per_io-v4-0-54210b91a46f@purestorage.com>
To: Ming Lei, Jens Axboe, Caleb Sander Mateos
Cc: linux-block@vger.kernel.org, linux-kernel@vger.kernel.org, Uday Shankar
X-Mailer: b4 0.14.2

We now allow multiple tasks to operate on I/Os belonging to the same
queue concurrently. This means that any writes to ublk_queue in the I/O
path are potential sources of data races. Try to prevent these by
marking ublk_queue pointers as const in ublk_handle_need_get_data. Also
move a bit more of the NEED_GET_DATA-specific logic into
ublk_handle_need_get_data, to make the pattern in __ublk_ch_uring_cmd
more uniform.

Suggested-by: Ming Lei
Signed-off-by: Uday Shankar
---
 drivers/block/ublk_drv.c | 33 ++++++++++++++++++++-------------
 1 file changed, 20 insertions(+), 13 deletions(-)

diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index e2cb54895481aebaa91ab23ba05cf26a950a642f..c8ce9349ca280b8b16040a1242a62b895ee01b5d 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1291,7 +1291,7 @@ static void ublk_cmd_tw_cb(struct io_uring_cmd *cmd,
         ublk_dispatch_req(ubq, pdu->req, issue_flags);
 }
 
-static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq)
+static void ublk_queue_cmd(const struct ublk_queue *ubq, struct request *rq)
 {
         struct io_uring_cmd *cmd = ubq->ios[rq->tag].cmd;
         struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
@@ -1813,15 +1813,6 @@ static void ublk_mark_io_ready(struct ublk_device *ub, struct ublk_queue *ubq)
         mutex_unlock(&ub->mutex);
 }
 
-static void ublk_handle_need_get_data(struct ublk_device *ub, int q_id,
-                int tag)
-{
-        struct ublk_queue *ubq = ublk_get_queue(ub, q_id);
-        struct request *req = blk_mq_tag_to_rq(ub->tag_set.tags[q_id], tag);
-
-        ublk_queue_cmd(ubq, req);
-}
-
 static inline int ublk_check_cmd_op(u32 cmd_op)
 {
         u32 ioc_type = _IOC_TYPE(cmd_op);
@@ -1933,6 +1924,21 @@ static int ublk_commit_and_fetch(const struct ublk_queue *ubq,
         return -EIOCBQUEUED;
 }
 
+static int ublk_handle_need_get_data(const struct ublk_queue *ubq,
+                                     struct ublk_io *io,
+                                     struct io_uring_cmd *cmd,
+                                     const struct ublksrv_io_cmd *ub_cmd,
+                                     struct request *req)
+{
+        if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
+                return -EINVAL;
+
+        ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
+        ublk_queue_cmd(ubq, req);
+
+        return -EIOCBQUEUED;
+}
+
 static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
                                unsigned int issue_flags,
                                const struct ublksrv_io_cmd *ub_cmd)
@@ -2025,10 +2031,11 @@ static int __ublk_ch_uring_cmd(struct io_uring_cmd *cmd,
                         goto out;
                 break;
         case UBLK_IO_NEED_GET_DATA:
-                if (!(io->flags & UBLK_IO_FLAG_OWNED_BY_SRV))
+                ret = ublk_handle_need_get_data(
+                        ubq, io, cmd, ub_cmd,
+                        blk_mq_tag_to_rq(ub->tag_set.tags[ub_cmd->q_id], tag));
+                if (ret != -EIOCBQUEUED)
                         goto out;
-                ublk_fill_io_cmd(io, cmd, ub_cmd->addr);
-                ublk_handle_need_get_data(ub, ub_cmd->q_id, ub_cmd->tag);
                 break;
         default:
                 goto out;
-- 
2.34.1