From nobody Sat Feb 7 06:34:11 2026
From: Caleb Sander Mateos
To: Jens Axboe, Miklos Szeredi, Ming Lei, Keith Busch, Christoph Hellwig,
 Sagi Grimberg, Chris Mason, David Sterba
Cc: io-uring@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
 linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org,
 Caleb Sander Mateos
Subject: [PATCH v3 1/4] io_uring: expose io_should_terminate_tw()
Date: Sun, 26 Oct 2025 20:02:59 -0600
Message-ID: <20251027020302.822544-2-csander@purestorage.com>
In-Reply-To: <20251027020302.822544-1-csander@purestorage.com>
References: <20251027020302.822544-1-csander@purestorage.com>

A subsequent commit will call io_should_terminate_tw() from an inline
function in include/linux/io_uring/cmd.h, so move it from an io_uring
internal header to include/linux/io_uring.h.
Callers outside io_uring should not call it directly.

Signed-off-by: Caleb Sander Mateos
---
 include/linux/io_uring.h | 14 ++++++++++++++
 io_uring/io_uring.h      | 13 -------------
 2 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/include/linux/io_uring.h b/include/linux/io_uring.h
index 85fe4e6b275c..c2a12287b821 100644
--- a/include/linux/io_uring.h
+++ b/include/linux/io_uring.h
@@ -1,13 +1,27 @@
 /* SPDX-License-Identifier: GPL-2.0-or-later */
 #ifndef _LINUX_IO_URING_H
 #define _LINUX_IO_URING_H
 
+#include
 #include
 #include
 #include
 
+/*
+ * Terminate the request if either of these conditions are true:
+ *
+ * 1) It's being executed by the original task, but that task is marked
+ *    with PF_EXITING as it's exiting.
+ * 2) PF_KTHREAD is set, in which case the invoker of the task_work is
+ *    our fallback task_work.
+ */
+static inline bool io_should_terminate_tw(struct io_ring_ctx *ctx)
+{
+	return (current->flags & (PF_KTHREAD | PF_EXITING)) || percpu_ref_is_dying(&ctx->refs);
+}
+
 #if defined(CONFIG_IO_URING)
 void __io_uring_cancel(bool cancel_all);
 void __io_uring_free(struct task_struct *tsk);
 void io_uring_unreg_ringfd(void);
 const char *io_uring_get_opcode(u8 opcode);
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 46d9141d772a..78777bf1ea4b 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -556,23 +556,10 @@ static inline bool io_allowed_run_tw(struct io_ring_ctx *ctx)
 {
	return likely(!(ctx->flags & IORING_SETUP_DEFER_TASKRUN) ||
		      ctx->submitter_task == current);
 }
 
-/*
- * Terminate the request if either of these conditions are true:
- *
- * 1) It's being executed by the original task, but that task is marked
- *    with PF_EXITING as it's exiting.
- * 2) PF_KTHREAD is set, in which case the invoker of the task_work is
- *    our fallback task_work.
- */
-static inline bool io_should_terminate_tw(struct io_ring_ctx *ctx)
-{
-	return (current->flags & (PF_KTHREAD | PF_EXITING)) || percpu_ref_is_dying(&ctx->refs);
-}
-
 static inline void io_req_queue_tw_complete(struct io_kiocb *req, s32 res)
 {
	io_req_set_res(req, res, 0);
	req->io_task_work.func = io_req_task_complete;
	io_req_task_work_add(req);
-- 
2.45.2

From nobody Sat Feb 7 06:34:11 2026
From: Caleb Sander Mateos
To: Jens Axboe, Miklos Szeredi, Ming Lei, Keith Busch, Christoph Hellwig,
 Sagi Grimberg, Chris Mason, David Sterba
Cc: io-uring@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
 linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org,
 Caleb Sander Mateos
Subject: [PATCH v3 2/4] io_uring/uring_cmd: call io_should_terminate_tw() when needed
Date: Sun, 26 Oct 2025 20:03:00 -0600
Message-ID: <20251027020302.822544-3-csander@purestorage.com>
In-Reply-To: <20251027020302.822544-1-csander@purestorage.com>
References: <20251027020302.822544-1-csander@purestorage.com>
Most uring_cmd task work callbacks don't check IO_URING_F_TASK_DEAD.
But it's computed unconditionally in io_uring_cmd_work(). Add a helper
io_uring_cmd_should_terminate_tw() and call it instead of checking
IO_URING_F_TASK_DEAD in the one callback, fuse_uring_send_in_task().
Remove the now unused IO_URING_F_TASK_DEAD.

Signed-off-by: Caleb Sander Mateos
---
 fs/fuse/dev_uring.c            | 2 +-
 include/linux/io_uring/cmd.h   | 7 ++++++-
 include/linux/io_uring_types.h | 1 -
 io_uring/uring_cmd.c           | 6 +-----
 4 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/fs/fuse/dev_uring.c b/fs/fuse/dev_uring.c
index f6b12aebb8bb..71b0c9662716 100644
--- a/fs/fuse/dev_uring.c
+++ b/fs/fuse/dev_uring.c
@@ -1214,11 +1214,11 @@ static void fuse_uring_send_in_task(struct io_uring_cmd *cmd,
 {
	struct fuse_ring_ent *ent = uring_cmd_to_ring_ent(cmd);
	struct fuse_ring_queue *queue = ent->queue;
	int err;
 
-	if (!(issue_flags & IO_URING_F_TASK_DEAD)) {
+	if (!io_uring_cmd_should_terminate_tw(cmd)) {
		err = fuse_uring_prepare_send(ent, ent->fuse_req);
		if (err) {
			fuse_uring_next_fuse_req(ent, queue, issue_flags);
			return;
		}
diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index 7509025b4071..b84b97c21b43 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -1,11 +1,11 @@
 /* SPDX-License-Identifier: GPL-2.0-or-later */
 #ifndef _LINUX_IO_URING_CMD_H
 #define _LINUX_IO_URING_CMD_H
 
 #include
-#include
+#include
 #include
 
 /* only top 8 bits of sqe->uring_cmd_flags for kernel internal use */
 #define IORING_URING_CMD_CANCELABLE	(1U << 30)
 /* io_uring_cmd is being issued again */
@@ -143,10 +143,15 @@ static inline void io_uring_cmd_complete_in_task(struct io_uring_cmd *ioucmd,
					io_uring_cmd_tw_t task_work_cb)
 {
	__io_uring_cmd_do_in_task(ioucmd, task_work_cb, 0);
 }
 
+static inline bool io_uring_cmd_should_terminate_tw(struct io_uring_cmd *cmd)
+{
+	return io_should_terminate_tw(cmd_to_io_kiocb(cmd)->ctx);
+}
+
 static inline struct task_struct *io_uring_cmd_get_task(struct io_uring_cmd *cmd)
 {
	return cmd_to_io_kiocb(cmd)->tctx->task;
 }
 
diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index c2ea6280901d..278c4a25c9e8 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -37,11 +37,10 @@ enum io_uring_cmd_flags {
	IO_URING_F_IOPOLL		= (1 << 10),
 
	/* set when uring wants to cancel a previously issued command */
	IO_URING_F_CANCEL		= (1 << 11),
	IO_URING_F_COMPAT		= (1 << 12),
-	IO_URING_F_TASK_DEAD		= (1 << 13),
 };
 
 struct io_wq_work_node {
	struct io_wq_work_node *next;
 };
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index d1e3ba62ee8e..35bdac35cf4d 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -114,17 +114,13 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 EXPORT_SYMBOL_GPL(io_uring_cmd_mark_cancelable);
 
 static void io_uring_cmd_work(struct io_kiocb *req, io_tw_token_t tw)
 {
	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
-	unsigned int flags = IO_URING_F_COMPLETE_DEFER;
-
-	if (io_should_terminate_tw(req->ctx))
-		flags |= IO_URING_F_TASK_DEAD;
 
	/* task_work executor checks the deffered list completion */
-	ioucmd->task_work_cb(ioucmd, flags);
+	ioucmd->task_work_cb(ioucmd, IO_URING_F_COMPLETE_DEFER);
 }
 
 void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
			       io_uring_cmd_tw_t task_work_cb,
			       unsigned flags)
-- 
2.45.2

From nobody Sat Feb 7 06:34:11 2026
From: Caleb Sander Mateos
To: Jens Axboe, Miklos Szeredi, Ming Lei, Keith Busch, Christoph Hellwig,
 Sagi Grimberg, Chris Mason, David Sterba
Cc: io-uring@vger.kernel.org, linux-fsdevel@vger.kernel.org,
 linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
 linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org,
 Caleb Sander Mateos
Subject: [PATCH v3 3/4] io_uring: add wrapper type for io_req_tw_func_t arg
Date: Sun, 26 Oct 2025 20:03:01 -0600
Message-ID: <20251027020302.822544-4-csander@purestorage.com>
In-Reply-To: <20251027020302.822544-1-csander@purestorage.com>
References: <20251027020302.822544-1-csander@purestorage.com>

In preparation for uring_cmd implementations to implement functions
with the io_req_tw_func_t signature, introduce a wrapper struct
io_tw_req to hide the struct io_kiocb * argument. The intention is for
only the io_uring core to access the inner struct io_kiocb *. uring_cmd
implementations should instead call a helper from io_uring/cmd.h to
convert struct io_tw_req to struct io_uring_cmd *.
Signed-off-by: Caleb Sander Mateos
---
 include/linux/io_uring_types.h |  6 +++++-
 io_uring/futex.c               | 16 +++++++++-------
 io_uring/io_uring.c            | 21 ++++++++++++---------
 io_uring/io_uring.h            |  4 ++--
 io_uring/msg_ring.c            |  3 ++-
 io_uring/notif.c               |  5 +++--
 io_uring/poll.c                | 11 ++++++-----
 io_uring/poll.h                |  2 +-
 io_uring/rw.c                  |  5 +++--
 io_uring/rw.h                  |  2 +-
 io_uring/timeout.c             | 18 +++++++++++-------
 io_uring/uring_cmd.c           |  3 ++-
 io_uring/waitid.c              |  7 ++++---
 13 files changed, 61 insertions(+), 42 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index 278c4a25c9e8..1326eb9ad899 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -611,11 +611,15 @@ enum {
	REQ_F_IMPORT_BUFFER	= IO_REQ_FLAG(REQ_F_IMPORT_BUFFER_BIT),
	/* ->sqe_copy() has been called, if necessary */
	REQ_F_SQE_COPIED	= IO_REQ_FLAG(REQ_F_SQE_COPIED_BIT),
 };
 
-typedef void (*io_req_tw_func_t)(struct io_kiocb *req, io_tw_token_t tw);
+struct io_tw_req {
+	struct io_kiocb *req;
+};
+
+typedef void (*io_req_tw_func_t)(struct io_tw_req tw_req, io_tw_token_t tw);
 
 struct io_task_work {
	struct llist_node node;
	io_req_tw_func_t func;
 };
diff --git a/io_uring/futex.c b/io_uring/futex.c
index 64f3bd51c84c..4e022c76236d 100644
--- a/io_uring/futex.c
+++ b/io_uring/futex.c
@@ -39,28 +39,30 @@ bool io_futex_cache_init(struct io_ring_ctx *ctx)
 void io_futex_cache_free(struct io_ring_ctx *ctx)
 {
	io_alloc_cache_free(&ctx->futex_cache, kfree);
 }
 
-static void __io_futex_complete(struct io_kiocb *req, io_tw_token_t tw)
+static void __io_futex_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
-	hlist_del_init(&req->hash_node);
-	io_req_task_complete(req, tw);
+	hlist_del_init(&tw_req.req->hash_node);
+	io_req_task_complete(tw_req, tw);
 }
 
-static void io_futex_complete(struct io_kiocb *req, io_tw_token_t tw)
+static void io_futex_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
	struct io_ring_ctx *ctx = req->ctx;
 
	io_tw_lock(ctx, tw);
	io_cache_free(&ctx->futex_cache, req->async_data);
	io_req_async_data_clear(req, 0);
-	__io_futex_complete(req, tw);
+	__io_futex_complete(tw_req, tw);
 }
 
-static void io_futexv_complete(struct io_kiocb *req, io_tw_token_t tw)
+static void io_futexv_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
	struct io_futex *iof = io_kiocb_to_cmd(req, struct io_futex);
	struct futex_vector *futexv = req->async_data;
 
	io_tw_lock(req->ctx, tw);
 
@@ -71,11 +73,11 @@ static void io_futexv_complete(struct io_kiocb *req, io_tw_token_t tw)
		if (res != -1)
			io_req_set_res(req, res, 0);
	}
 
	io_req_async_data_free(req);
-	__io_futex_complete(req, tw);
+	__io_futex_complete(tw_req, tw);
 }
 
 static bool io_futexv_claim(struct io_futex *iof)
 {
	if (test_bit(0, &iof->futexv_owned) ||
diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 296667ba712c..4eb3e47a8184 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -274,11 +274,11 @@ static __cold void io_fallback_req_func(struct work_struct *work)
	struct io_tw_state ts = {};
 
	percpu_ref_get(&ctx->refs);
	mutex_lock(&ctx->uring_lock);
	llist_for_each_entry_safe(req, tmp, node, io_task_work.node)
-		req->io_task_work.func(req, ts);
+		req->io_task_work.func((struct io_tw_req){req}, ts);
	io_submit_flush_completions(ctx);
	mutex_unlock(&ctx->uring_lock);
	percpu_ref_put(&ctx->refs);
 }
 
@@ -522,13 +522,13 @@ static void io_queue_iowq(struct io_kiocb *req)
 
	trace_io_uring_queue_async_work(req, io_wq_is_hashed(&req->work));
	io_wq_enqueue(tctx->io_wq, &req->work);
 }
 
-static void io_req_queue_iowq_tw(struct io_kiocb *req, io_tw_token_t tw)
+static void io_req_queue_iowq_tw(struct io_tw_req tw_req, io_tw_token_t tw)
 {
-	io_queue_iowq(req);
+	io_queue_iowq(tw_req.req);
 }
 
 void io_req_queue_iowq(struct io_kiocb *req)
 {
	req->io_task_work.func = io_req_queue_iowq_tw;
@@ -1148,11 +1148,11 @@ struct llist_node *io_handle_tw_list(struct llist_node *node,
		mutex_lock(&ctx->uring_lock);
		percpu_ref_get(&ctx->refs);
	}
	INDIRECT_CALL_2(req->io_task_work.func,
			io_poll_task_func, io_req_rw_complete,
-			req, ts);
+			(struct io_tw_req){req}, ts);
	node = next;
	(*count)++;
	if (unlikely(need_resched())) {
		ctx_flush_and_put(ctx, ts);
		ctx = NULL;
@@ -1376,11 +1376,11 @@ static int __io_run_local_work_loop(struct llist_node **node,
		struct llist_node *next = (*node)->next;
		struct io_kiocb *req = container_of(*node, struct io_kiocb,
						    io_task_work.node);
		INDIRECT_CALL_2(req->io_task_work.func,
				io_poll_task_func, io_req_rw_complete,
-				req, tw);
+				(struct io_tw_req){req}, tw);
		*node = next;
		if (++ret >= events)
			break;
	}
 
@@ -1445,18 +1445,21 @@ static int io_run_local_work(struct io_ring_ctx *ctx, int min_events,
	ret = __io_run_local_work(ctx, ts, min_events, max_events);
	mutex_unlock(&ctx->uring_lock);
	return ret;
 }
 
-static void io_req_task_cancel(struct io_kiocb *req, io_tw_token_t tw)
+static void io_req_task_cancel(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
+
	io_tw_lock(req->ctx, tw);
	io_req_defer_failed(req, req->cqe.res);
 }
 
-void io_req_task_submit(struct io_kiocb *req, io_tw_token_t tw)
+void io_req_task_submit(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
	struct io_ring_ctx *ctx = req->ctx;
 
	io_tw_lock(ctx, tw);
	if (unlikely(io_should_terminate_tw(ctx)))
		io_req_defer_failed(req, -EFAULT);
@@ -1688,13 +1691,13 @@ static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_events)
	} while (nr_events < min_events);
 
	return 0;
 }
 
-void io_req_task_complete(struct io_kiocb *req, io_tw_token_t tw)
+void io_req_task_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
-	io_req_complete_defer(req);
+	io_req_complete_defer(tw_req.req);
 }
 
 /*
  * After the iocb has been issued, it's safe to be found on the poll list.
  * Adding the kiocb to the list AFTER submission ensures that we don't
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 78777bf1ea4b..fc1b1adb9d99 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -146,13 +146,13 @@ struct file *io_file_get_fixed(struct io_kiocb *req, int fd,
			       unsigned issue_flags);
 
 void __io_req_task_work_add(struct io_kiocb *req, unsigned flags);
 void io_req_task_work_add_remote(struct io_kiocb *req, unsigned flags);
 void io_req_task_queue(struct io_kiocb *req);
-void io_req_task_complete(struct io_kiocb *req, io_tw_token_t tw);
+void io_req_task_complete(struct io_tw_req tw_req, io_tw_token_t tw);
 void io_req_task_queue_fail(struct io_kiocb *req, int ret);
-void io_req_task_submit(struct io_kiocb *req, io_tw_token_t tw);
+void io_req_task_submit(struct io_tw_req tw_req, io_tw_token_t tw);
 struct llist_node *io_handle_tw_list(struct llist_node *node, unsigned int *count,
				     unsigned int max_entries);
 struct llist_node *tctx_task_work_run(struct io_uring_task *tctx, unsigned int max_entries,
				      unsigned int *count);
 void tctx_task_work(struct callback_head *cb);
 __cold void io_uring_cancel_generic(bool cancel_all, struct io_sq_data *sqd);
 
diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c
index 5e5b94236d72..7063ea7964e7 100644
--- a/io_uring/msg_ring.c
+++ b/io_uring/msg_ring.c
@@ -68,12 +68,13 @@ void io_msg_ring_cleanup(struct io_kiocb *req)
 static inline bool io_msg_need_remote(struct io_ring_ctx *target_ctx)
 {
	return target_ctx->task_complete;
 }
 
-static void io_msg_tw_complete(struct io_kiocb *req, io_tw_token_t tw)
+static void io_msg_tw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
	struct io_ring_ctx *ctx = req->ctx;
 
	io_add_aux_cqe(ctx, req->cqe.user_data, req->cqe.res, req->cqe.flags);
	kfree_rcu(req, rcu_head);
	percpu_ref_put(&ctx->refs);
diff --git a/io_uring/notif.c b/io_uring/notif.c
index d8ba1165c949..9960bb2a32d5 100644
--- a/io_uring/notif.c
+++ b/io_uring/notif.c
@@ -9,12 +9,13 @@
 #include "notif.h"
 #include "rsrc.h"
 
 static const struct ubuf_info_ops io_ubuf_ops;
 
-static void io_notif_tw_complete(struct io_kiocb *notif, io_tw_token_t tw)
+static void io_notif_tw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *notif = tw_req.req;
	struct io_notif_data *nd = io_notif_to_data(notif);
	struct io_ring_ctx *ctx = notif->ctx;
 
	lockdep_assert_held(&ctx->uring_lock);
 
@@ -32,11 +33,11 @@ static void io_notif_tw_complete(struct io_kiocb *notif, io_tw_token_t tw)
			__io_unaccount_mem(notif->ctx->user, nd->account_pages);
			nd->account_pages = 0;
		}
 
		nd = nd->next;
-		io_req_task_complete(notif, tw);
+		io_req_task_complete((struct io_tw_req){notif}, tw);
	} while (nd);
 }
 
 void io_tx_ubuf_complete(struct sk_buff *skb, struct ubuf_info *uarg,
			 bool success)
diff --git a/io_uring/poll.c b/io_uring/poll.c
index b9681d0f9f13..9a6925d5688c 100644
--- a/io_uring/poll.c
+++ b/io_uring/poll.c
@@ -308,12 +308,13 @@ static int io_poll_check_events(struct io_kiocb *req, io_tw_token_t tw)
 
	io_napi_add(req);
	return IOU_POLL_NO_ACTION;
 }
 
-void io_poll_task_func(struct io_kiocb *req, io_tw_token_t tw)
+void io_poll_task_func(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
	int ret;
 
	ret = io_poll_check_events(req, tw);
	if (ret == IOU_POLL_NO_ACTION) {
		return;
@@ -330,26 +331,26 @@ void io_poll_task_func(struct io_kiocb *req, io_tw_token_t tw)
			struct io_poll *poll;
 
			poll = io_kiocb_to_cmd(req, struct io_poll);
			req->cqe.res = mangle_poll(req->cqe.res & poll->events);
		} else if (ret == IOU_POLL_REISSUE) {
-			io_req_task_submit(req, tw);
+			io_req_task_submit(tw_req, tw);
			return;
		} else if (ret != IOU_POLL_REMOVE_POLL_USE_RES) {
			req->cqe.res = ret;
			req_set_fail(req);
		}
 
		io_req_set_res(req, req->cqe.res, 0);
-		io_req_task_complete(req, tw);
+		io_req_task_complete(tw_req, tw);
	} else {
		io_tw_lock(req->ctx, tw);
 
		if (ret == IOU_POLL_REMOVE_POLL_USE_RES)
-			io_req_task_complete(req, tw);
+			io_req_task_complete(tw_req, tw);
		else if (ret == IOU_POLL_DONE || ret == IOU_POLL_REISSUE)
-			io_req_task_submit(req, tw);
+			io_req_task_submit(tw_req, tw);
		else
			io_req_defer_failed(req, ret);
	}
 }
 
diff --git a/io_uring/poll.h b/io_uring/poll.h
index c8438286dfa0..5647c5138932 100644
--- a/io_uring/poll.h
+++ b/io_uring/poll.h
@@ -44,6 +44,6 @@ int io_poll_cancel(struct io_ring_ctx *ctx, struct io_cancel_data *cd,
 int io_arm_apoll(struct io_kiocb *req, unsigned issue_flags, __poll_t mask);
 int io_arm_poll_handler(struct io_kiocb *req, unsigned issue_flags);
 bool io_poll_remove_all(struct io_ring_ctx *ctx, struct io_uring_task *tctx,
			bool cancel_all);
 
-void io_poll_task_func(struct io_kiocb *req, io_tw_token_t tw);
+void io_poll_task_func(struct io_tw_req tw_req, io_tw_token_t tw);
diff --git a/io_uring/rw.c b/io_uring/rw.c
index 5b2241a5813c..828ac4f902b4 100644
--- a/io_uring/rw.c
+++ b/io_uring/rw.c
@@ -562,12 +562,13 @@ static inline int io_fixup_rw_res(struct io_kiocb *req, long res)
		res += io->bytes_done;
	}
	return res;
 }
 
-void io_req_rw_complete(struct io_kiocb *req, io_tw_token_t tw)
+void io_req_rw_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
	struct io_rw *rw = io_kiocb_to_cmd(req, struct io_rw);
	struct kiocb *kiocb = &rw->kiocb;
 
	if ((kiocb->ki_flags & IOCB_DIO_CALLER_COMP) && kiocb->dio_complete) {
		long res = kiocb->dio_complete(rw->kiocb.private);
@@ -579,11 +580,11 @@ void io_req_rw_complete(struct io_kiocb *req, io_tw_token_t tw)
 
	if (req->flags & (REQ_F_BUFFER_SELECTED|REQ_F_BUFFER_RING))
		req->cqe.flags |= io_put_kbuf(req, req->cqe.res, NULL);
 
	io_req_rw_cleanup(req, 0);
-	io_req_task_complete(req, tw);
+	io_req_task_complete(tw_req, tw);
 }
 
 static void io_complete_rw(struct kiocb *kiocb, long res)
 {
	struct io_rw *rw = container_of(kiocb, struct io_rw, kiocb);
diff --git a/io_uring/rw.h b/io_uring/rw.h
index 129a53fe5482..9bd7fbf70ea9 100644
--- a/io_uring/rw.h
+++ b/io_uring/rw.h
@@ -44,9 +44,9 @@ int io_read(struct io_kiocb *req, unsigned int issue_flags);
 int io_write(struct io_kiocb *req, unsigned int issue_flags);
 int io_read_fixed(struct io_kiocb *req, unsigned int issue_flags);
 int io_write_fixed(struct io_kiocb *req, unsigned int issue_flags);
 void io_readv_writev_cleanup(struct io_kiocb *req);
 void io_rw_fail(struct io_kiocb *req);
-void io_req_rw_complete(struct io_kiocb *req, io_tw_token_t tw);
+void io_req_rw_complete(struct io_tw_req tw_req, io_tw_token_t tw);
 int io_read_mshot_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe);
 int io_read_mshot(struct io_kiocb *req, unsigned int issue_flags);
 void io_rw_cache_free(const void *entry);
diff --git a/io_uring/timeout.c b/io_uring/timeout.c
index 17e3aab0af36..d368eaacddb6 100644
--- a/io_uring/timeout.c
+++ b/io_uring/timeout.c
@@ -66,12 +66,13 @@ static inline bool io_timeout_finish(struct io_timeout *timeout,
	return true;
 }
 
 static enum hrtimer_restart io_timeout_fn(struct hrtimer *timer);
 
-static void io_timeout_complete(struct io_kiocb *req, io_tw_token_t tw)
+static void io_timeout_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	struct io_kiocb *req = tw_req.req;
	struct io_timeout *timeout = io_kiocb_to_cmd(req, struct io_timeout);
	struct io_timeout_data *data = req->async_data;
	struct io_ring_ctx *ctx = req->ctx;
 
	if (!io_timeout_finish(timeout, data)) {
@@ -83,11 +84,11 @@ static void io_timeout_complete(struct io_kiocb *req, io_tw_token_t tw)
			raw_spin_unlock_irq(&ctx->timeout_lock);
			return;
		}
	}
 
-	io_req_task_complete(req, tw);
+	io_req_task_complete(tw_req, tw);
 }
 
 static __cold bool io_flush_killed_timeouts(struct list_head *list, int err)
 {
	if (list_empty(list))
@@ -155,22 +156,24 @@ __cold void io_flush_timeouts(struct io_ring_ctx *ctx)
	ctx->cq_last_tm_flush = seq;
	raw_spin_unlock_irq(&ctx->timeout_lock);
io_flush_killed_timeouts(&list, 0); } =20 -static void io_req_tw_fail_links(struct io_kiocb *link, io_tw_token_t tw) +static void io_req_tw_fail_links(struct io_tw_req tw_req, io_tw_token_t tw) { + struct io_kiocb *link =3D tw_req.req; + io_tw_lock(link->ctx, tw); while (link) { struct io_kiocb *nxt =3D link->link; long res =3D -ECANCELED; =20 if (link->flags & REQ_F_FAIL) res =3D link->cqe.res; link->link =3D NULL; io_req_set_res(link, res, 0); - io_req_task_complete(link, tw); + io_req_task_complete((struct io_tw_req){link}, tw); link =3D nxt; } } =20 static void io_fail_links(struct io_kiocb *req) @@ -315,12 +318,13 @@ int io_timeout_cancel(struct io_ring_ctx *ctx, struct= io_cancel_data *cd) return PTR_ERR(req); io_req_task_queue_fail(req, -ECANCELED); return 0; } =20 -static void io_req_task_link_timeout(struct io_kiocb *req, io_tw_token_t t= w) +static void io_req_task_link_timeout(struct io_tw_req tw_req, io_tw_token_= t tw) { + struct io_kiocb *req =3D tw_req.req; struct io_timeout *timeout =3D io_kiocb_to_cmd(req, struct io_timeout); struct io_kiocb *prev =3D timeout->prev; int ret; =20 if (prev) { @@ -333,15 +337,15 @@ static void io_req_task_link_timeout(struct io_kiocb = *req, io_tw_token_t tw) ret =3D io_try_cancel(req->tctx, &cd, 0); } else { ret =3D -ECANCELED; } io_req_set_res(req, ret ?: -ETIME, 0); - io_req_task_complete(req, tw); + io_req_task_complete(tw_req, tw); io_put_req(prev); } else { io_req_set_res(req, -ETIME, 0); - io_req_task_complete(req, tw); + io_req_task_complete(tw_req, tw); } } =20 static enum hrtimer_restart io_link_timeout_fn(struct hrtimer *timer) { diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c index 35bdac35cf4d..fc2955e8caaf 100644 --- a/io_uring/uring_cmd.c +++ b/io_uring/uring_cmd.c @@ -111,12 +111,13 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd= *cmd, io_ring_submit_unlock(ctx, issue_flags); } } EXPORT_SYMBOL_GPL(io_uring_cmd_mark_cancelable); =20 -static void io_uring_cmd_work(struct io_kiocb *req, 
io_tw_token_t tw) +static void io_uring_cmd_work(struct io_tw_req tw_req, io_tw_token_t tw) { + struct io_kiocb *req =3D tw_req.req; struct io_uring_cmd *ioucmd =3D io_kiocb_to_cmd(req, struct io_uring_cmd); =20 /* task_work executor checks the deffered list completion */ ioucmd->task_work_cb(ioucmd, IO_URING_F_COMPLETE_DEFER); } diff --git a/io_uring/waitid.c b/io_uring/waitid.c index 53532ae6256c..326a8d090772 100644 --- a/io_uring/waitid.c +++ b/io_uring/waitid.c @@ -14,11 +14,11 @@ #include "io_uring.h" #include "cancel.h" #include "waitid.h" #include "../kernel/exit.h" =20 -static void io_waitid_cb(struct io_kiocb *req, io_tw_token_t tw); +static void io_waitid_cb(struct io_tw_req tw_req, io_tw_token_t tw); =20 #define IO_WAITID_CANCEL_FLAG BIT(31) #define IO_WAITID_REF_MASK GENMASK(30, 0) =20 struct io_waitid { @@ -177,12 +177,13 @@ static inline bool io_waitid_drop_issue_ref(struct io= _kiocb *req) io_req_task_work_add(req); remove_wait_queue(iw->head, &iwa->wo.child_wait); return true; } =20 -static void io_waitid_cb(struct io_kiocb *req, io_tw_token_t tw) +static void io_waitid_cb(struct io_tw_req tw_req, io_tw_token_t tw) { + struct io_kiocb *req =3D tw_req.req; struct io_waitid_async *iwa =3D req->async_data; struct io_ring_ctx *ctx =3D req->ctx; int ret; =20 io_tw_lock(ctx, tw); @@ -213,11 +214,11 @@ static void io_waitid_cb(struct io_kiocb *req, io_tw_= token_t tw) remove_wait_queue(iw->head, &iwa->wo.child_wait); } } =20 io_waitid_complete(req, ret); - io_req_task_complete(req, tw); + io_req_task_complete(tw_req, tw); } =20 static int io_waitid_wait(struct wait_queue_entry *wait, unsigned mode, int sync, void *key) { --=20 2.45.2 From nobody Sat Feb 7 06:34:11 2026 Received: from mail-wm1-f97.google.com (mail-wm1-f97.google.com [209.85.128.97]) (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 6AE812C0F64 for ; Mon, 27 Oct 2025 02:03:10 +0000 
From: Caleb Sander Mateos
To: Jens Axboe, Miklos Szeredi, Ming Lei, Keith Busch, Christoph Hellwig,
	Sagi Grimberg, Chris Mason, David Sterba
Cc: io-uring@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
	linux-btrfs@vger.kernel.org, linux-kernel@vger.kernel.org,
	Caleb Sander Mateos
Subject: [PATCH v3 4/4] io_uring/uring_cmd: avoid double indirect call in
 task work dispatch
Date: Sun, 26 Oct 2025 20:03:02 -0600
Message-ID: <20251027020302.822544-5-csander@purestorage.com>
In-Reply-To: <20251027020302.822544-1-csander@purestorage.com>
References: <20251027020302.822544-1-csander@purestorage.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

io_uring task work dispatch makes an indirect call to struct io_kiocb's
io_task_work.func field to allow running arbitrary task work functions.
In the uring_cmd case, this calls io_uring_cmd_work(), which immediately
makes another indirect call to struct io_uring_cmd's task_work_cb field.

Change the uring_cmd task work callbacks to functions whose signatures
match io_req_tw_func_t. Add a function io_uring_cmd_from_tw() to convert
from the task work's struct io_tw_req argument to struct io_uring_cmd *.
Define a constant IO_URING_CMD_TASK_WORK_ISSUE_FLAGS to avoid
manufacturing issue_flags in the uring_cmd task work callbacks.

Now uring_cmd task work dispatch makes a single indirect call to the
uring_cmd implementation's callback. This also allows removing the
task_work_cb field from struct io_uring_cmd, freeing up 8 bytes for
future storage.

Signed-off-by: Caleb Sander Mateos
---
 block/ioctl.c                |  4 +++-
 drivers/block/ublk_drv.c     | 15 +++++++++------
 drivers/nvme/host/ioctl.c    |  5 +++--
 fs/btrfs/ioctl.c             |  4 +++-
 fs/fuse/dev_uring.c          |  5 +++--
 include/linux/io_uring/cmd.h | 22 +++++++++++++---------
 io_uring/uring_cmd.c         | 14 ++------------
 7 files changed, 36 insertions(+), 33 deletions(-)

diff --git a/block/ioctl.c b/block/ioctl.c
index d7489a56b33c..44de038660e7 100644
--- a/block/ioctl.c
+++ b/block/ioctl.c
@@ -767,12 +767,14 @@ long compat_blkdev_ioctl(struct file *file, unsigned cmd, unsigned long arg)
 struct blk_iou_cmd {
 	int res;
 	bool nowait;
 };
 
-static void blk_cmd_complete(struct io_uring_cmd *cmd, unsigned int issue_flags)
+static void blk_cmd_complete(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	unsigned int issue_flags = IO_URING_CMD_TASK_WORK_ISSUE_FLAGS;
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	struct blk_iou_cmd *bic = io_uring_cmd_to_pdu(cmd, struct blk_iou_cmd);
 
 	if (bic->res == -EAGAIN && bic->nowait)
 		io_uring_cmd_issue_blocking(cmd);
 	else
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 0c74a41a6753..bdccd15ba577 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -1346,13 +1346,14 @@ static void ublk_dispatch_req(struct ublk_queue *ubq,
 
 	if (ublk_prep_auto_buf_reg(ubq, req, io, issue_flags))
 		ublk_complete_io_cmd(io, req, UBLK_IO_RES_OK, issue_flags);
 }
 
-static void ublk_cmd_tw_cb(struct io_uring_cmd *cmd,
-			   unsigned int issue_flags)
+static void ublk_cmd_tw_cb(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	unsigned int issue_flags = IO_URING_CMD_TASK_WORK_ISSUE_FLAGS;
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
 	struct ublk_queue *ubq = pdu->ubq;
 
 	ublk_dispatch_req(ubq, pdu->req, issue_flags);
 }
@@ -1364,13 +1365,14 @@ static void ublk_queue_cmd(struct ublk_queue *ubq, struct request *rq)
 
 	pdu->req = rq;
 	io_uring_cmd_complete_in_task(cmd, ublk_cmd_tw_cb);
 }
 
-static void ublk_cmd_list_tw_cb(struct io_uring_cmd *cmd,
-				unsigned int issue_flags)
+static void ublk_cmd_list_tw_cb(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	unsigned int issue_flags = IO_URING_CMD_TASK_WORK_ISSUE_FLAGS;
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	struct ublk_uring_cmd_pdu *pdu = ublk_get_uring_cmd_pdu(cmd);
 	struct request *rq = pdu->req_list;
 	struct request *next;
 
 	do {
@@ -2521,13 +2523,14 @@ static inline struct request *__ublk_check_and_get_req(struct ublk_device *ub,
 fail_put:
 	ublk_put_req_ref(io, req);
 	return NULL;
 }
 
-static void ublk_ch_uring_cmd_cb(struct io_uring_cmd *cmd,
-				 unsigned int issue_flags)
+static void ublk_ch_uring_cmd_cb(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	unsigned int issue_flags = IO_URING_CMD_TASK_WORK_ISSUE_FLAGS;
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	int ret = ublk_ch_uring_cmd_local(cmd, issue_flags);
 
 	if (ret != -EIOCBQUEUED)
 		io_uring_cmd_done(cmd, ret, issue_flags);
 }
diff --git a/drivers/nvme/host/ioctl.c b/drivers/nvme/host/ioctl.c
index c212fa952c0f..6a2a0ef29674 100644
--- a/drivers/nvme/host/ioctl.c
+++ b/drivers/nvme/host/ioctl.c
@@ -396,13 +396,14 @@ static inline struct nvme_uring_cmd_pdu *nvme_uring_cmd_pdu(
 	struct io_uring_cmd *ioucmd)
 {
 	return io_uring_cmd_to_pdu(ioucmd, struct nvme_uring_cmd_pdu);
 }
 
-static void nvme_uring_task_cb(struct io_uring_cmd *ioucmd,
-			       unsigned issue_flags)
+static void nvme_uring_task_cb(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	unsigned int issue_flags = IO_URING_CMD_TASK_WORK_ISSUE_FLAGS;
+	struct io_uring_cmd *ioucmd = io_uring_cmd_from_tw(tw_req);
 	struct nvme_uring_cmd_pdu *pdu = nvme_uring_cmd_pdu(ioucmd);
 
 	if (pdu->bio)
 		blk_rq_unmap_user(pdu->bio);
 	io_uring_cmd_done32(ioucmd, pdu->status, pdu->result, issue_flags);
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 185bef0df1c2..1936927ee6a4 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -4647,12 +4647,14 @@ struct btrfs_uring_priv {
 struct io_btrfs_cmd {
 	struct btrfs_uring_encoded_data *data;
 	struct btrfs_uring_priv *priv;
 };
 
-static void btrfs_uring_read_finished(struct io_uring_cmd *cmd, unsigned int issue_flags)
+static void btrfs_uring_read_finished(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	unsigned int issue_flags = IO_URING_CMD_TASK_WORK_ISSUE_FLAGS;
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	struct io_btrfs_cmd *bc = io_uring_cmd_to_pdu(cmd, struct io_btrfs_cmd);
 	struct btrfs_uring_priv *priv = bc->priv;
 	struct btrfs_inode *inode = BTRFS_I(file_inode(priv->iocb.ki_filp));
 	struct extent_io_tree *io_tree = &inode->io_tree;
 	pgoff_t index;
diff --git a/fs/fuse/dev_uring.c b/fs/fuse/dev_uring.c
index 71b0c9662716..30923495e80f 100644
--- a/fs/fuse/dev_uring.c
+++ b/fs/fuse/dev_uring.c
@@ -1207,13 +1207,14 @@ static void fuse_uring_send(struct fuse_ring_ent *ent, struct io_uring_cmd *cmd,
 /*
  * This prepares and sends the ring request in fuse-uring task context.
  * User buffers are not mapped yet - the application does not have permission
  * to write to it - this has to be executed in ring task context.
  */
-static void fuse_uring_send_in_task(struct io_uring_cmd *cmd,
-				    unsigned int issue_flags)
+static void fuse_uring_send_in_task(struct io_tw_req tw_req, io_tw_token_t tw)
 {
+	unsigned int issue_flags = IO_URING_CMD_TASK_WORK_ISSUE_FLAGS;
+	struct io_uring_cmd *cmd = io_uring_cmd_from_tw(tw_req);
 	struct fuse_ring_ent *ent = uring_cmd_to_ring_ent(cmd);
 	struct fuse_ring_queue *queue = ent->queue;
 	int err;
 
 	if (!io_uring_cmd_should_terminate_tw(cmd)) {
diff --git a/include/linux/io_uring/cmd.h b/include/linux/io_uring/cmd.h
index b84b97c21b43..8e3322fb6fa5 100644
--- a/include/linux/io_uring/cmd.h
+++ b/include/linux/io_uring/cmd.h
@@ -9,21 +9,17 @@
 /* only top 8 bits of sqe->uring_cmd_flags for kernel internal use */
 #define IORING_URING_CMD_CANCELABLE	(1U << 30)
 /* io_uring_cmd is being issued again */
 #define IORING_URING_CMD_REISSUE	(1U << 31)
 
-typedef void (*io_uring_cmd_tw_t)(struct io_uring_cmd *cmd,
-				  unsigned issue_flags);
-
 struct io_uring_cmd {
 	struct file	*file;
 	const struct io_uring_sqe *sqe;
-	/* callback to defer completions to task context */
-	io_uring_cmd_tw_t task_work_cb;
 	u32		cmd_op;
 	u32		flags;
 	u8		pdu[32]; /* available inline for free use */
+	u8		unused[8];
 };
 
 static inline const void *io_uring_sqe_cmd(const struct io_uring_sqe *sqe)
 {
 	return sqe->cmd;
@@ -58,11 +54,11 @@ int io_uring_cmd_import_fixed_vec(struct io_uring_cmd *ioucmd,
  */
 void __io_uring_cmd_done(struct io_uring_cmd *cmd, s32 ret, u64 res2,
 			 unsigned issue_flags, bool is_cqe32);
 
 void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
-			       io_uring_cmd_tw_t task_work_cb,
+			       io_req_tw_func_t task_work_cb,
 			       unsigned flags);
 
 /*
  * Note: the caller should never hard code @issue_flags and only use the
  * mask provided by the core io_uring code.
@@ -107,11 +103,11 @@ static inline int io_uring_cmd_import_fixed_vec(struct io_uring_cmd *ioucmd,
 static inline void __io_uring_cmd_done(struct io_uring_cmd *cmd, s32 ret,
 				       u64 ret2, unsigned issue_flags, bool is_cqe32)
 {
 }
 static inline void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
-			io_uring_cmd_tw_t task_work_cb, unsigned flags)
+			io_req_tw_func_t task_work_cb, unsigned flags)
 {
 }
 static inline void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 						unsigned int issue_flags)
 {
@@ -130,19 +126,27 @@ static inline bool io_uring_mshot_cmd_post_cqe(struct io_uring_cmd *ioucmd,
 {
 	return true;
 }
 #endif
 
+static inline struct io_uring_cmd *io_uring_cmd_from_tw(struct io_tw_req tw_req)
+{
+	return io_kiocb_to_cmd(tw_req.req, struct io_uring_cmd);
+}
+
+/* task_work executor checks the deferred list completion */
+#define IO_URING_CMD_TASK_WORK_ISSUE_FLAGS IO_URING_F_COMPLETE_DEFER
+
 /* users must follow the IOU_F_TWQ_LAZY_WAKE semantics */
 static inline void io_uring_cmd_do_in_task_lazy(struct io_uring_cmd *ioucmd,
-						io_uring_cmd_tw_t task_work_cb)
+						io_req_tw_func_t task_work_cb)
 {
 	__io_uring_cmd_do_in_task(ioucmd, task_work_cb, IOU_F_TWQ_LAZY_WAKE);
 }
 
 static inline void io_uring_cmd_complete_in_task(struct io_uring_cmd *ioucmd,
-						 io_uring_cmd_tw_t task_work_cb)
+						 io_req_tw_func_t task_work_cb)
 {
 	__io_uring_cmd_do_in_task(ioucmd, task_work_cb, 0);
 }
 
 static inline bool io_uring_cmd_should_terminate_tw(struct io_uring_cmd *cmd)
diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c
index fc2955e8caaf..5a80d35658dc 100644
--- a/io_uring/uring_cmd.c
+++ b/io_uring/uring_cmd.c
@@ -111,30 +111,20 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd *cmd,
 		io_ring_submit_unlock(ctx, issue_flags);
 	}
 }
 EXPORT_SYMBOL_GPL(io_uring_cmd_mark_cancelable);
 
-static void io_uring_cmd_work(struct io_tw_req tw_req, io_tw_token_t tw)
-{
-	struct io_kiocb *req = tw_req.req;
-	struct io_uring_cmd *ioucmd = io_kiocb_to_cmd(req, struct io_uring_cmd);
-
-	/* task_work executor checks the deffered list completion */
-	ioucmd->task_work_cb(ioucmd, IO_URING_F_COMPLETE_DEFER);
-}
-
 void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd,
-			       io_uring_cmd_tw_t task_work_cb,
+			       io_req_tw_func_t task_work_cb,
 			       unsigned flags)
 {
 	struct io_kiocb *req = cmd_to_io_kiocb(ioucmd);
 
 	if (WARN_ON_ONCE(req->flags & REQ_F_APOLL_MULTISHOT))
 		return;
 
-	ioucmd->task_work_cb = task_work_cb;
-	req->io_task_work.func = io_uring_cmd_work;
+	req->io_task_work.func = task_work_cb;
 	__io_req_task_work_add(req, flags);
 }
 EXPORT_SYMBOL_GPL(__io_uring_cmd_do_in_task);
 
 static inline void io_req_set_cqe32_extra(struct io_kiocb *req,
-- 
2.45.2