From nobody Sat Feb 7 06:34:13 2026
From: Caleb Sander Mateos
To: Jens Axboe, Miklos Szeredi, Ming Lei, Keith Busch, Christoph Hellwig, Sagi Grimberg, Chris Mason, David Sterba
Cc: io-uring@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org, Caleb Sander Mateos
Subject: [PATCH v4 1/3] io_uring: only call io_should_terminate_tw() once for ctx
Date: Fri, 31 Oct 2025 14:34:28 -0600
Message-ID: <20251031203430.3886957-2-csander@purestorage.com>
In-Reply-To: <20251031203430.3886957-1-csander@purestorage.com>
References: <20251031203430.3886957-1-csander@purestorage.com>

io_fallback_req_func() calls io_should_terminate_tw() on each req's ctx.
But since the reqs all come from the ctx's fallback_llist, req->ctx will
be ctx for all of the reqs.
Therefore, compute ts.cancel as io_should_terminate_tw(ctx) just once,
outside the loop.

Signed-off-by: Caleb Sander Mateos
---
 io_uring/io_uring.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 93a1cc2bf383..4e6676ac4662 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -287,14 +287,13 @@ static __cold void io_fallback_req_func(struct work_struct *work)
 	struct io_kiocb *req, *tmp;
 	struct io_tw_state ts = {};

 	percpu_ref_get(&ctx->refs);
 	mutex_lock(&ctx->uring_lock);
-	llist_for_each_entry_safe(req, tmp, node, io_task_work.node) {
-		ts.cancel = io_should_terminate_tw(req->ctx);
+	ts.cancel = io_should_terminate_tw(ctx);
+	llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
 		req->io_task_work.func(req, ts);
-	}
 	io_submit_flush_completions(ctx);
 	mutex_unlock(&ctx->uring_lock);
 	percpu_ref_put(&ctx->refs);
 }

-- 
2.45.2

From nobody Sat Feb 7 06:34:13 2026
From: Caleb Sander Mateos
To: Jens Axboe, Miklos Szeredi, Ming Lei, Keith Busch, Christoph Hellwig, Sagi Grimberg, Chris Mason, David Sterba
Cc: io-uring@vger.kernel.org, linux-fsdevel@vger.kernel.org, linux-block@vger.kernel.org, linux-nvme@lists.infradead.org, linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org, Caleb Sander Mateos
Subject: [PATCH v4 2/3] io_uring: add wrapper type for io_req_tw_func_t arg
Date: Fri, 31 Oct 2025 14:34:29 -0600
Message-ID: <20251031203430.3886957-3-csander@purestorage.com>
In-Reply-To: <20251031203430.3886957-1-csander@purestorage.com>
References: <20251031203430.3886957-1-csander@purestorage.com>

In preparation for uring_cmd implementations to implement functions with
the io_req_tw_func_t signature, introduce a wrapper struct io_tw_req to
hide the struct io_kiocb * argument. The intention is for only the
io_uring core to access the inner struct io_kiocb *. uring_cmd
implementations should instead call a helper from io_uring/cmd.h to
convert struct io_tw_req to struct io_uring_cmd *.
Signed-off-by: Caleb Sander Mateos
---
 include/linux/io_uring_types.h |  6 +++++-
 io_uring/futex.c               | 16 +++++++++-------
 io_uring/io_uring.c            | 21 ++++++++++++---------
 io_uring/io_uring.h            |  4 ++--
 io_uring/msg_ring.c            |  3 ++-
 io_uring/notif.c               |  5 +++--
 io_uring/poll.c                | 11 ++++++-----
 io_uring/poll.h                |  2 +-
 io_uring/rw.c                  |  5 +++--
 io_uring/rw.h                  |  2 +-
 io_uring/timeout.c             | 18 +++++++++++-------
 io_uring/uring_cmd.c           |  3 ++-
 io_uring/waitid.c              |  7 ++++---
 13 files changed, 61 insertions(+), 42 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 25ee982eb435..f064a438ce43 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -613,11 +613,15 @@ enum {
 	REQ_F_IMPORT_BUFFER	= IO_REQ_FLAG(REQ_F_IMPORT_BUFFER_BIT),
 	/* ->sqe_copy() has been called, if necessary */
 	REQ_F_SQE_COPIED	= IO_REQ_FLAG(REQ_F_SQE_COPIED_BIT),
 };

-typedef void (*io_req_tw_func_t)(struct io_kiocb *req, io_tw_token_t tw);
+struct io_tw_req {
+	struct io_kiocb *req;
+};
+
+typedef void (*io_req_tw_func_t)(struct io_tw_req tw_req, io_tw_token_t tw);

 struct io_task_work {
 	struct llist_node node;
 	io_req_tw_func_t func;
 };
diff --git a/io_uring/futex.c b/io_uring/futex.c
index 64f3bd51c84c..4e022c76236d 100644
--- a/io_uring/futex.c
+++ b/io_uring/futex.c
@@ -39,28 +39,30 @@ bool io_futex_cache_init(struct io_ring_ctx *ctx)
 void io_futex_cache_free(struct io_ring_ctx *ctx)
 {
 	io_alloc_cache_free(&ctx->futex_cache, kfree);
 }

-static void __io_futex_complete(struct io_kiocb *req, io_tw_token_t tw)
+static void __io_futex_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
-	hlist_del_init(&req->hash_node);
-	io_req_task_complete(req, tw);
+	hlist_del_init(&tw_req.req->hash_node);
+	io_req_task_complete(tw_req, tw);
 }

-static void io_futex_complete(struct io_kiocb *req, io_tw_token_t tw)
+static void io_futex_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
 	struct io_ring_ctx *ctx = req->ctx;

 	io_tw_lock(ctx, tw);
 	io_cache_free(&ctx->futex_cache, req->async_data);
 	io_req_async_data_clear(req, 0);
-	__io_futex_complete(req, tw);
+	__io_futex_complete(tw_req, tw);
 }

-static void io_futexv_complete(struct io_kiocb *req, io_tw_token_t tw)
+static void io_futexv_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
 	struct io_futex *iof = io_kiocb_to_cmd(req, struct io_futex);
 	struct futex_vector *futexv = req->async_data;

 	io_tw_lock(req->ctx, tw);

@@ -71,11 +73,11 @@ static void io_futexv_complete(struct io_kiocb *req, io_tw_token_t tw)
 		if (res != -1)
 			io_req_set_res(req, res, 0);
 	}

 	io_req_async_data_free(req);
-	__io_futex_complete(req, tw);
+	__io_futex_complete(tw_req, tw);
 }

 static bool io_futexv_claim(struct io_futex *iof)
 {
 	if (test_bit(0, &iof->futexv_owned) ||
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 4e6676ac4662..01631b6ff442 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -289,11 +289,11 @@ static __cold void io_fallback_req_func(struct work_struct *work)

 	percpu_ref_get(&ctx->refs);
 	mutex_lock(&ctx->uring_lock);
 	ts.cancel = io_should_terminate_tw(ctx);
 	llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
-		req->io_task_work.func(req, ts);
+		req->io_task_work.func((struct io_tw_req){req}, ts);
 	io_submit_flush_completions(ctx);
 	mutex_unlock(&ctx->uring_lock);
 	percpu_ref_put(&ctx->refs);
 }

@@ -537,13 +537,13 @@ static void io_queue_iowq(struct io_kiocb *req)

 	trace_io_uring_queue_async_work(req, io_wq_is_hashed(&req->work));
 	io_wq_enqueue(tctx->io_wq, &req->work);
 }

-static void io_req_queue_iowq_tw(struct io_kiocb *req, io_tw_token_t tw)
+static void io_req_queue_iowq_tw(struct io_tw_req tw_req, io_tw_token_t tw)
 {
-	io_queue_iowq(req);
+	io_queue_iowq(tw_req.req);
 }

 void io_req_queue_iowq(struct io_kiocb *req)
 {
 	req->io_task_work.func = io_req_queue_iowq_tw;
@@ -1164,11 +1164,11 @@ struct llist_node *io_handle_tw_list(struct llist_node *node,
 			percpu_ref_get(&ctx->refs);
 			ts.cancel = io_should_terminate_tw(ctx);
 		}
 		INDIRECT_CALL_2(req->io_task_work.func,
 				io_poll_task_func, io_req_rw_complete,
-				req, ts);
+				(struct io_tw_req){req}, ts);
 		node = next;
 		(*count)++;
 		if (unlikely(need_resched())) {
 			ctx_flush_and_put(ctx, ts);
 			ctx = NULL;
@@ -1387,11 +1387,11 @@ static int __io_run_local_work_loop(struct llist_node **node,
 		struct llist_node *next = (*node)->next;
 		struct io_kiocb *req = container_of(*node, struct io_kiocb,
 						    io_task_work.node);
 		INDIRECT_CALL_2(req->io_task_work.func,
 				io_poll_task_func, io_req_rw_complete,
-				req, tw);
+				(struct io_tw_req){req}, tw);
 		*node = next;
 		if (++ret >= events)
 			break;
 	}

@@ -1457,18 +1457,21 @@ static int io_run_local_work(struct io_ring_ctx *ctx, int min_events,
 	ret = __io_run_local_work(ctx, ts, min_events, max_events);
 	mutex_unlock(&ctx->uring_lock);
 	return ret;
 }

-static void io_req_task_cancel(struct io_kiocb *req, io_tw_token_t tw)
+static void io_req_task_cancel(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
+
 	io_tw_lock(req->ctx, tw);
 	io_req_defer_failed(req, req->cqe.res);
 }

-void io_req_task_submit(struct io_kiocb *req, io_tw_token_t tw)
+void io_req_task_submit(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
 	struct io_ring_ctx *ctx = req->ctx;

 	io_tw_lock(ctx, tw);
 	if (unlikely(tw.cancel))
 		io_req_defer_failed(req, -EFAULT);
@@ -1700,13 +1703,13 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_events)
 	} while (nr_events < min_events);

 	return 0;
 }

-void io_req_task_complete(struct io_kiocb *req, io_tw_token_t tw)
+void io_req_task_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
-	io_req_complete_defer(req);
+	io_req_complete_defer(tw_req.req);
 }

 /*
  * After the iocb has been issued, it's safe to be found on the poll list.
  * Adding the kiocb to the list AFTER submission ensures that we don't
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 44b8091c7fcd..f97356ce29d0 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -147,13 +147,13 @@ struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
			       unsigned issue_flags);

 void __io_req_task_work_add(struct io_kiocb *req, unsigned flags);
 void io_req_task_work_add_remote(struct io_kiocb *req, unsigned flags);
 void io_req_task_queue(struct io_kiocb *req);
-void io_req_task_complete(struct io_kiocb *req, io_tw_token_t tw);
+void io_req_task_complete(struct io_tw_req tw_req, io_tw_token_t tw);
 void io_req_task_queue_fail(struct io_kiocb *req, int ret);
-void io_req_task_submit(struct io_kiocb *req, io_tw_token_t tw);
+void io_req_task_submit(struct io_tw_req tw_req, io_tw_token_t tw);
 struct llist_node *io_handle_tw_list(struct llist_node *node, unsigned int *count,
				     unsigned int max_entries);
 struct llist_node *tctx_task_work_run(struct io_uring_task *tctx, unsigned int max_entries,
				      unsigned int *count);
 void tctx_task_work(struct callback_head *cb);
 __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);

diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
index 5e5b94236d72..7063ea7964e7 100644
--- a/io_uring/msg_ring.c
+++ b/io_uring/msg_ring.c
@@ -68,12 +68,13 @@ void io_msg_ring_cleanup(struct io_kiocb *req)
 static inline bool io_msg_need_remote(struct io_ring_ctx *target_ctx)
 {
 	return target_ctx->task_complete;
 }

-static void io_msg_tw_complete(struct io_kiocb *req, io_tw_token_t tw)
+static void io_msg_tw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
 	struct io_ring_ctx *ctx = req->ctx;

 	io_add_aux_cqe(ctx, req->cqe.user_data, req->cqe.res, req->cqe.flags);
 	kfree_rcu(req, rcu_head);
 	percpu_ref_put(&ctx->refs);
diff --git a/io_uring/notif.c b/io_uring/notif.c
index d8ba1165c949..9960bb2a32d5 100644
--- a/io_uring/notif.c
+++ b/io_uring/notif.c
@@ -9,12 +9,13 @@
 #include "notif.h"
 #include "rsrc.h"

 static const struct ubuf_info_ops io_ubuf_ops;

-static void io_notif_tw_complete(struct io_kiocb *notif, io_tw_token_t tw)
+static void io_notif_tw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *notif = tw_req.req;
 	struct io_notif_data *nd = io_notif_to_data(notif);
 	struct io_ring_ctx *ctx = notif->ctx;

 	lockdep_assert_held(&ctx->uring_lock);

@@ -32,11 +33,11 @@ static void io_notif_tw_complete(struct io_kiocb *notif, io_tw_token_t tw)
			__io_unaccount_mem(notif->ctx->user, nd->account_pages);
			nd->account_pages = 0;
 		}

 		nd = nd->next;
-		io_req_task_complete(notif, tw);
+		io_req_task_complete((struct io_tw_req){notif}, tw);
 	} while (nd);
 }

 void io_tx_ubuf_complete(struct sk_buff *skb, struct ubuf_info *uarg,
			 bool success)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index c403e751841a..8aa4e3a31e73 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -308,12 +308,13 @@ static int io_poll_check_events(struct io_kiocb *req, io_tw_token_t tw)

 	io_napi_add(req);
 	return IOU_POLL_NO_ACTION;
 }

-void io_poll_task_func(struct io_kiocb *req, io_tw_token_t tw)
+void io_poll_task_func(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
 	int ret;

 	ret = io_poll_check_events(req, tw);
 	if (ret == IOU_POLL_NO_ACTION) {
 		return;
@@ -330,26 +331,26 @@ void io_poll_task_func(struct io_kiocb *req, io_tw_token_t tw)
			struct io_poll *poll;

			poll = io_kiocb_to_cmd(req, struct io_poll);
			req->cqe.res = mangle_poll(req->cqe.res & poll->events);
 		} else if (ret == IOU_POLL_REISSUE) {
-			io_req_task_submit(req, tw);
+			io_req_task_submit(tw_req, tw);
			return;
 		} else if (ret != IOU_POLL_REMOVE_POLL_USE_RES) {
			req->cqe.res = ret;
			req_set_fail(req);
 		}

 		io_req_set_res(req, req->cqe.res, 0);
-		io_req_task_complete(req, tw);
+		io_req_task_complete(tw_req, tw);
 	} else {
 		io_tw_lock(req->ctx, tw);

 		if (ret == IOU_POLL_REMOVE_POLL_USE_RES)
-			io_req_task_complete(req, tw);
+			io_req_task_complete(tw_req, tw);
 		else if (ret == IOU_POLL_DONE || ret == IOU_POLL_REISSUE)
-			io_req_task_submit(req, tw);
+			io_req_task_submit(tw_req, tw);
 		else
			io_req_defer_failed(req, ret);
 	}
 }

diff --git a/io_uring/poll.h b/io_uring/poll.h
index c8438286dfa0..5647c5138932 100644
--- a/io_uring/poll.h
+++ b/io_uring/poll.h
@@ -44,6 +44,6 @@ int io_poll_cancel(struct io_ring_ctx *ctx, struct io_cancel_data *cd,
 int io_arm_apoll(struct io_kiocb *req, unsigned issue_flags, __poll_t mask);
 int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags);
 bool io_poll_remove_all(struct io_ring_ctx *ctx, struct io_uring_task *tctx,
			bool cancel_all);

-void io_poll_task_func(struct io_kiocb *req, io_tw_token_t tw);
+void io_poll_task_func(struct io_tw_req tw_req, io_tw_token_t tw);
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 5b2241a5813c..828ac4f902b4 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -562,12 +562,13 @@ static inline int io_fixup_rw_res(struct io_kiocb *req, long res)
		res += io->bytes_done;
 	}
 	return res;
 }

-void io_req_rw_complete(struct io_kiocb *req, io_tw_token_t tw)
+void io_req_rw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
 	struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
 	struct kiocb *kiocb = &rw->kiocb;

 	if ((kiocb->ki_flags & IOCB_DIO_CALLER_COMP) && kiocb->dio_complete) {
 		long res = kiocb->dio_complete(rw->kiocb.private);
@@ -579,11 +580,11 @@ void io_req_rw_complete(struct io_kiocb *req, io_tw_token_t tw)

 	if (req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING))
 		req->cqe.flags |= io_put_kbuf(req, req->cqe.res, NULL);

 	io_req_rw_cleanup(req, 0);
-	io_req_task_complete(req, tw);
+	io_req_task_complete(tw_req, tw);
 }

 static void io_complete_rw(struct kiocb *kiocb, long res)
 {
 	struct io_rw *rw = container_of(kiocb, struct io_rw, kiocb);
diff --git a/io_uring/rw.h b/io_uring/rw.h
index 129a53fe5482..9bd7fbf70ea9 100644
--- a/io_uring/rw.h
+++ b/io_uring/rw.h
@@ -44,9 +44,9 @@ int io_read(struct io_kiocb *req, unsigned int issue_flags);
 int io_write(struct io_kiocb *req, unsigned int issue_flags);
 int io_read_fixed(struct io_kiocb *req, unsigned int issue_flags);
 int io_write_fixed(struct io_kiocb *req, unsigned int issue_flags);
 void io_readv_writev_cleanup(struct io_kiocb *req);
 void io_rw_fail(struct io_kiocb *req);
-void io_req_rw_complete(struct io_kiocb *req, io_tw_token_t tw);
+void io_req_rw_complete(struct io_tw_req tw_req, io_tw_token_t tw);
 int io_read_mshot_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
 int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags);
 void io_rw_cache_free(const void *entry);
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 444142ba9d04..d8fbbaf31cf3 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -66,12 +66,13 @@ static inline bool io_timeout_finish(struct io_timeout *timeout,
 	return true;
 }

 static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer);

-static void io_timeout_complete(struct io_kiocb *req, io_tw_token_t tw)
+static void io_timeout_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
 	struct io_timeout *timeout = io_kiocb_to_cmd(req, struct io_timeout);
 	struct io_timeout_data *data = req->async_data;
 	struct io_ring_ctx *ctx = req->ctx;

 	if (!io_timeout_finish(timeout, data)) {
@@ -83,11 +84,11 @@ static void io_timeout_complete(struct io_kiocb *req, io_tw_token_t tw)
			raw_spin_unlock_irq(&ctx->timeout_lock);
			return;
 		}
 	}

-	io_req_task_complete(req, tw);
+	io_req_task_complete(tw_req, tw);
 }

 static __cold bool io_flush_killed_timeouts(struct list_head *list, int err)
 {
 	if (list_empty(list))
@@ -155,22 +156,24 @@ __cold void io_flush_timeouts(struct io_ring_ctx *ctx)
 	ctx->cq_last_tm_flush = seq;
 	raw_spin_unlock_irq(&ctx->timeout_lock);
 	io_flush_killed_timeouts(&list, 0);
 }

-static void io_req_tw_fail_links(struct io_kiocb *link, io_tw_token_t tw)
+static void io_req_tw_fail_links(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *link = tw_req.req;
+
 	io_tw_lock(link->ctx, tw);
 	while (link) {
 		struct io_kiocb *nxt = link->link;
 		long res = -ECANCELED;

 		if (link->flags & REQ_F_FAIL)
			res = link->cqe.res;
 		link->link = NULL;
 		io_req_set_res(link, res, 0);
-		io_req_task_complete(link, tw);
+		io_req_task_complete((struct io_tw_req){link}, tw);
 		link = nxt;
 	}
 }

 static void io_fail_links(struct io_kiocb *req)
@@ -315,12 +318,13 @@ int io_timeout_cancel(struct io_ring_ctx *ctx, struct io_cancel_data *cd)
 		return PTR_ERR(req);
 	io_req_task_queue_fail(req, -ECANCELED);
 	return 0;
 }

-static void io_req_task_link_timeout(struct io_kiocb *req, io_tw_token_t tw)
+static void io_req_task_link_timeout(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
 	struct io_timeout *timeout = io_kiocb_to_cmd(req, struct io_timeout);
 	struct io_kiocb *prev = timeout->prev;
 	int ret;

 	if (prev) {
@@ -333,15 +337,15 @@ static void io_req_task_link_timeout(struct io_kiocb *req, io_tw_token_t tw)
			ret = io_try_cancel(req->tctx, &cd, 0);
 		} else {
			ret = -ECANCELED;
 		}
 		io_req_set_res(req, ret ?: -ETIME, 0);
-		io_req_task_complete(req, tw);
+		io_req_task_complete(tw_req, tw);
 		io_put_req(prev);
 	} else {
 		io_req_set_res(req, -ETIME, 0);
-		io_req_task_complete(req, tw);
+		io_req_task_complete(tw_req, tw);
 	}
 }

 static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer)
 {
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index 9d67a2a721aa..c09b99e91c86 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -111,12 +111,13 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
		io_ring_submit_unlock(ctx, issue_flags);
 	}
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_mark_cancelable);

-static void io_uring_cmd_work(struct io_kiocb *req, io_tw_token_t tw)
+static void io_uring_cmd_work(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
 	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
 	unsigned int flags = IO_URING_F_COMPLETE_DEFER;

 	if (unlikely(tw.cancel))
		flags |= IO_URING_F_TASK_DEAD;
diff --git a/io_uring/waitid.c b/io_uring/waitid.c
index c5e0d979903a..62f7f1f004a5 100644
--- a/io_uring/waitid.c
+++ b/io_uring/waitid.c
@@ -14,11 +14,11 @@
 #include "io_uring.h"
 #include "cancel.h"
 #include "waitid.h"
 #include "../kernel/exit.h"

-static void io_waitid_cb(struct io_kiocb *req, io_tw_token_t tw);
+static void io_waitid_cb(struct io_tw_req tw_req, io_tw_token_t tw);

 #define IO_WAITID_CANCEL_FLAG	BIT(31)
 #define IO_WAITID_REF_MASK	GENMASK(30, 0)

 struct io_waitid {
@@ -192,12 +192,13 @@ static inline bool io_waitid_drop_issue_ref(struct io_kiocb *req)
 	req->io_task_work.func = io_waitid_cb;
 	io_req_task_work_add(req);
 	return true;
 }

-static void io_waitid_cb(struct io_kiocb *req, io_tw_token_t tw)
+static void io_waitid_cb(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
 	struct io_waitid_async *iwa = req->async_data;
 	struct io_ring_ctx *ctx = req->ctx;
 	int ret;

 	io_tw_lock(ctx, tw);
@@ -227,11 +228,11 @@ static void io_waitid_cb(struct io_kiocb *req, io_tw_token_t tw)
			/* fall through to complete, will kill waitqueue */
 		}
 	}

 	io_waitid_complete(req, ret);
-	io_req_task_complete(req, tw);
+	io_req_task_complete(tw_req, tw);
 }

 static int io_waitid_wait(struct wait_queue_entry *wait, unsigned mode,
			  int sync, void *key)
 {
-- 
2.45.2

From nobody Sat Feb 7 06:34:13 2026
From: Caleb Sander Mateos
To: Jens Axboe, Miklos Szeredi, Ming Lei, Keith Busch, Christoph Hellwig,
	Sagi Grimberg, Chris Mason, David Sterba
Cc: io-uring@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org,
	Caleb Sander Mateos
Subject: [PATCH v4 3/3] io_uring/uring_cmd: avoid double indirect call in
	task work dispatch
Date: Fri, 31 Oct 2025 14:34:30 -0600
Message-ID: <20251031203430.3886957-4-csander@purestorage.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20251031203430.3886957-1-csander@purestorage.com>
References: <20251031203430.3886957-1-csander@purestorage.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

io_uring task work dispatch makes an indirect call to struct io_kiocb's
io_task_work.func field to allow running arbitrary task work functions.
In the uring_cmd case, this calls io_uring_cmd_work(), which immediately
makes another indirect call to struct io_uring_cmd's task_work_cb field.

Change the uring_cmd task work callbacks to functions whose signatures
match io_req_tw_func_t. Add a function io_uring_cmd_from_tw() to convert
from the task work's struct io_tw_req argument to struct io_uring_cmd *.
Define a constant IO_URING_CMD_TASK_WORK_ISSUE_FLAGS to avoid
manufacturing issue_flags in the uring_cmd task work callbacks. Now
uring_cmd task work dispatch makes a single indirect call to the
uring_cmd implementation's callback. This also allows removing the
task_work_cb field from struct io_uring_cmd, freeing up 8 bytes for
future storage.

Since fuse_uring_send_in_task() now has access to the io_tw_token_t,
check its cancel field directly instead of relying on the
IO_URING_F_TASK_DEAD issue flag.

Signed-off-by: Caleb Sander Mateos
Reviewed-by: Christoph Hellwig
---
 block/ioctl.c                  |  6 ++++--
 drivers/block/ublk_drv.c       | 22 +++++++++++-----------
 drivers/nvme/host/ioctl.c      |  7 ++++---
 fs/btrfs/ioctl.c               |  5 +++--
 fs/fuse/dev_uring.c            |  7 ++++---
 include/linux/io_uring/cmd.h   | 22 +++++++++++++---------
 include/linux/io_uring_types.h |  1 -
 io_uring/uring_cmd.c           | 18 ++----------------
 8 files changed, 41 insertions(+), 47 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index d7489a56b33c..4ed17c5a4acc 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -767,18 +767,20 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
 struct blk_iou_cmd {
 	int res;
 	bool nowait;
 };
 
-static void blk_cmd_complete(struct io_uring_cmd *cmd, unsigned int issue_flags)
+static void blk_cmd_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
 
 	if (bic->res == -EAGAIN && bic->nowait)
 		io_uring_cmd_issue_blocking(cmd);
 	else
-		io_uring_cmd_done(cmd, bic->res, issue_flags);
+		io_uring_cmd_done(cmd, bic->res,
+				  IO_URING_CMD_TASK_WORK_ISSUE_FLAGS);
 }
 
 static void bio_cmd_bio_end_io(struct bio *bio)
 {
 	struct io_uring_cmd *cmd = bio->bi_private;
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 0c74a41a6753..e0c601128efa 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1300,14 +1300,13 @@ static bool ublk_start_io(const struct ublk_queue *ubq, struct request *req,
 	}
 
 	return true;
 }
 
-static void ublk_dispatch_req(struct ublk_queue *ubq,
-			      struct request *req,
-			      unsigned int issue_flags)
+static void ublk_dispatch_req(struct ublk_queue *ubq, struct request *req)
 {
+	unsigned int issue_flags = IO_URING_CMD_TASK_WORK_ISSUE_FLAGS;
 	int tag = req->tag;
 	struct ublk_io *io = &ubq->ios[tag];
 
 	pr_devel("%s: complete: qid %d tag %d io_flags %x addr %llx\n",
 		 __func__, ubq->q_id, req->tag, io->flags,
@@ -1346,17 +1345,17 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
 
 	if (ublk_prep_auto_buf_reg(ubq, req, io, issue_flags))
 		ublk_complete_io_cmd(io, req, UBLK_IO_RES_OK, issue_flags);
 }
 
-static void ublk_cmd_tw_cb(struct io_uring_cmd *cmd,
-			   unsigned int issue_flags)
+static void ublk_cmd_tw_cb(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
 	struct ublk_queue *ubq = pdu->ubq;
 
-	ublk_dispatch_req(ubq, pdu->req, issue_flags);
+	ublk_dispatch_req(ubq, pdu->req);
 }
 
 static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq)
 {
 	struct io_uring_cmd *cmd = ubq->ios[rq->tag].cmd;
@@ -1364,21 +1363,21 @@ static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq)
 
 	pdu->req = rq;
 	io_uring_cmd_complete_in_task(cmd, ublk_cmd_tw_cb);
 }
 
-static void ublk_cmd_list_tw_cb(struct io_uring_cmd *cmd,
-				unsigned int issue_flags)
+static void ublk_cmd_list_tw_cb(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
 	struct request *rq = pdu->req_list;
 	struct request *next;
 
 	do {
 		next = rq->rq_next;
 		rq->rq_next = NULL;
-		ublk_dispatch_req(rq->mq_hctx->driver_data, rq, issue_flags);
+		ublk_dispatch_req(rq->mq_hctx->driver_data, rq);
 		rq = next;
 	} while (rq);
 }
 
 static void ublk_queue_cmd_list(struct ublk_io *io, struct rq_list *l)
@@ -2521,13 +2520,14 @@ static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
 fail_put:
 	ublk_put_req_ref(io, req);
 	return NULL;
 }
 
-static void ublk_ch_uring_cmd_cb(struct io_uring_cmd *cmd,
-				 unsigned int issue_flags)
+static void ublk_ch_uring_cmd_cb(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	unsigned int issue_flags = IO_URING_CMD_TASK_WORK_ISSUE_FLAGS;
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	int ret = ublk_ch_uring_cmd_local(cmd, issue_flags);
 
 	if (ret != -EIOCBQUEUED)
 		io_uring_cmd_done(cmd, ret, issue_flags);
 }
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index c212fa952c0f..4fa8400a5627 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -396,18 +396,19 @@ static inline struct nvme_uring_cmd_pdu *nvme_uring_cmd_pdu(
 	struct io_uring_cmd *ioucmd)
 {
 	return io_uring_cmd_to_pdu(ioucmd, struct nvme_uring_cmd_pdu);
 }
 
-static void nvme_uring_task_cb(struct io_uring_cmd *ioucmd,
-			       unsigned issue_flags)
+static void nvme_uring_task_cb(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_uring_cmd *ioucmd = io_uring_cmd_from_tw(tw_req);
 	struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
 
 	if (pdu->bio)
 		blk_rq_unmap_user(pdu->bio);
-	io_uring_cmd_done32(ioucmd, pdu->status, pdu->result, issue_flags);
+	io_uring_cmd_done32(ioucmd, pdu->status, pdu->result,
+			    IO_URING_CMD_TASK_WORK_ISSUE_FLAGS);
 }
 
 static enum rq_end_io_ret nvme_uring_cmd_end_io(struct request *req,
 						blk_status_t err)
 {
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 8cb7d5a462ef..3171d9df0246 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -4647,12 +4647,13 @@ struct btrfs_uring_priv {
 struct io_btrfs_cmd {
 	struct btrfs_uring_encoded_data *data;
 	struct btrfs_uring_priv *priv;
 };
 
-static void btrfs_uring_read_finished(struct io_uring_cmd *cmd, unsigned int issue_flags)
+static void btrfs_uring_read_finished(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	struct io_btrfs_cmd *bc = io_uring_cmd_to_pdu(cmd, struct io_btrfs_cmd);
 	struct btrfs_uring_priv *priv = bc->priv;
 	struct btrfs_inode *inode = BTRFS_I(file_inode(priv->iocb.ki_filp));
 	struct extent_io_tree *io_tree = &inode->io_tree;
 	pgoff_t index;
@@ -4693,11 +4694,11 @@ static void btrfs_uring_read_finished(struct io_uring_cmd *cmd, unsigned int iss
 
 out:
 	btrfs_unlock_extent(io_tree, priv->start, priv->lockend, &priv->cached_state);
 	btrfs_inode_unlock(inode, BTRFS_ILOCK_SHARED);
 
-	io_uring_cmd_done(cmd, ret, issue_flags);
+	io_uring_cmd_done(cmd, ret, IO_URING_CMD_TASK_WORK_ISSUE_FLAGS);
 	add_rchar(current, ret);
 
 	for (index = 0; index < priv->nr_pages; index++)
 		__free_page(priv->pages[index]);
 
diff --git a/fs/fuse/dev_uring.c b/fs/fuse/dev_uring.c
index f6b12aebb8bb..f8c93dc45768 100644
--- a/fs/fuse/dev_uring.c
+++ b/fs/fuse/dev_uring.c
@@ -1207,18 +1207,19 @@ static void fuse_uring_send(struct fuse_ring_ent *ent, struct io_uring_cmd *cmd,
 /*
  * This prepares and sends the ring request in fuse-uring task context.
  * User buffers are not mapped yet - the application does not have permission
  * to write to it - this has to be executed in ring task context.
  */
-static void fuse_uring_send_in_task(struct io_uring_cmd *cmd,
-				    unsigned int issue_flags)
+static void fuse_uring_send_in_task(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	unsigned int issue_flags = IO_URING_CMD_TASK_WORK_ISSUE_FLAGS;
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	struct fuse_ring_ent *ent = uring_cmd_to_ring_ent(cmd);
 	struct fuse_ring_queue *queue = ent->queue;
 	int err;
 
-	if (!(issue_flags & IO_URING_F_TASK_DEAD)) {
+	if (!tw.cancel) {
 		err = fuse_uring_prepare_send(ent, ent->fuse_req);
 		if (err) {
 			fuse_uring_next_fuse_req(ent, queue, issue_flags);
 			return;
 		}
diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index 7509025b4071..375fd048c4cb 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -9,21 +9,17 @@
 /* only top 8 bits of sqe->uring_cmd_flags for kernel internal use */
 #define IORING_URING_CMD_CANCELABLE	(1U << 30)
 /* io_uring_cmd is being issued again */
 #define IORING_URING_CMD_REISSUE	(1U << 31)
 
-typedef void (*io_uring_cmd_tw_t)(struct io_uring_cmd *cmd,
-				  unsigned issue_flags);
-
 struct io_uring_cmd {
 	struct file	*file;
 	const struct io_uring_sqe *sqe;
-	/* callback to defer completions to task context */
-	io_uring_cmd_tw_t task_work_cb;
 	u32		cmd_op;
 	u32		flags;
 	u8		pdu[32]; /* available inline for free use */
+	u8		unused[8];
 };
 
 static inline const void *io_uring_sqe_cmd(const struct io_uring_sqe *sqe)
 {
 	return sqe->cmd;
@@ -58,11 +54,11 @@ int io_uring_cmd_import_fixed_vec(struct io_uring_cmd *ioucmd,
  */
 void __io_uring_cmd_done(struct io_uring_cmd *cmd, s32 ret, u64 res2,
 			 unsigned issue_flags, bool is_cqe32);
 
 void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
-			       io_uring_cmd_tw_t task_work_cb,
+			       io_req_tw_func_t task_work_cb,
 			       unsigned flags);
 
 /*
  * Note: the caller should never hard code @issue_flags and only use the
  * mask provided by the core io_uring code.
@@ -107,11 +103,11 @@ static inline int io_uring_cmd_import_fixed_vec(struct io_uring_cmd *ioucmd,
 static inline void __io_uring_cmd_done(struct io_uring_cmd *cmd, s32 ret,
 				       u64 ret2, unsigned issue_flags,
 				       bool is_cqe32)
 {
 }
 static inline void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
-		io_uring_cmd_tw_t task_work_cb, unsigned flags)
+		io_req_tw_func_t task_work_cb, unsigned flags)
 {
 }
 static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 						unsigned int issue_flags)
 {
@@ -130,19 +126,27 @@ static inline bool io_uring_mshot_cmd_post_cqe(struct io_uring_cmd *ioucmd,
 {
 	return true;
 }
 #endif
 
+static inline struct io_uring_cmd *io_uring_cmd_from_tw(struct io_tw_req tw_req)
+{
+	return io_kiocb_to_cmd(tw_req.req, struct io_uring_cmd);
+}
+
+/* task_work executor checks the deferred list completion */
+#define IO_URING_CMD_TASK_WORK_ISSUE_FLAGS IO_URING_F_COMPLETE_DEFER
+
 /* users must follow the IOU_F_TWQ_LAZY_WAKE semantics */
 static inline void io_uring_cmd_do_in_task_lazy(struct io_uring_cmd *ioucmd,
-			io_uring_cmd_tw_t task_work_cb)
+			io_req_tw_func_t task_work_cb)
 {
 	__io_uring_cmd_do_in_task(ioucmd, task_work_cb, IOU_F_TWQ_LAZY_WAKE);
 }
 
 static inline void io_uring_cmd_complete_in_task(struct io_uring_cmd *ioucmd,
-			io_uring_cmd_tw_t task_work_cb)
+			io_req_tw_func_t task_work_cb)
 {
 	__io_uring_cmd_do_in_task(ioucmd, task_work_cb, 0);
 }
 
 static inline struct task_struct *io_uring_cmd_get_task(struct io_uring_cmd *cmd)
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index f064a438ce43..92780764d5fa 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -37,11 +37,10 @@ enum io_uring_cmd_flags {
 	IO_URING_F_IOPOLL		= (1 << 10),
 
 	/* set when uring wants to cancel a previously issued command */
 	IO_URING_F_CANCEL		= (1 << 11),
 	IO_URING_F_COMPAT		= (1 << 12),
-	IO_URING_F_TASK_DEAD		= (1 << 13),
 };
 
 struct io_wq_work_node {
 	struct io_wq_work_node *next;
 };
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index c09b99e91c86..197474911f04 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -111,34 +111,20 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 		io_ring_submit_unlock(ctx, issue_flags);
 	}
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_mark_cancelable);
 
-static void io_uring_cmd_work(struct io_tw_req tw_req, io_tw_token_t tw)
-{
-	struct io_kiocb *req = tw_req.req;
-	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
-	unsigned int flags = IO_URING_F_COMPLETE_DEFER;
-
-	if (unlikely(tw.cancel))
-		flags |= IO_URING_F_TASK_DEAD;
-
-	/* task_work executor checks the deffered list completion */
-	ioucmd->task_work_cb(ioucmd, flags);
-}
-
 void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
-			       io_uring_cmd_tw_t task_work_cb,
+			       io_req_tw_func_t task_work_cb,
 			       unsigned flags)
 {
 	struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
 
 	if (WARN_ON_ONCE(req->flags & REQ_F_APOLL_MULTISHOT))
 		return;
 
-	ioucmd->task_work_cb = task_work_cb;
-	req->io_task_work.func = io_uring_cmd_work;
+	req->io_task_work.func = task_work_cb;
 	__io_req_task_work_add(req, flags);
 }
 EXPORT_SYMBOL_GPL(__io_uring_cmd_do_in_task);
 
 static inline void io_req_set_cqe32_extra(struct io_kiocb *req,
-- 
2.45.2