From nobody Mon Dec 1 23:34:48 2025
From: Caleb Sander Mateos
To: Jens Axboe
Cc: io-uring@vger.kernel.org, linux-kernel@vger.kernel.org, Caleb Sander Mateos
Subject: [PATCH v3 1/4] io_uring: clear IORING_SETUP_SINGLE_ISSUER for IORING_SETUP_SQPOLL
Date: Tue, 25 Nov 2025 16:39:25 -0700
Message-ID: <20251125233928.3962947-2-csander@purestorage.com>
In-Reply-To: <20251125233928.3962947-1-csander@purestorage.com>
References: <20251125233928.3962947-1-csander@purestorage.com>

IORING_SETUP_SINGLE_ISSUER doesn't currently enable any optimizations,
but it will soon be used to avoid taking io_ring_ctx's uring_lock when
submitting from the single issuer task. If the IORING_SETUP_SQPOLL flag
is set, the SQ thread is the sole task issuing SQEs. However, other
tasks may make io_uring_register() syscalls, which must be synchronized
with SQE submission. So it wouldn't be safe to skip the uring_lock
around the SQ thread's submission even if IORING_SETUP_SINGLE_ISSUER is
set. Therefore, clear IORING_SETUP_SINGLE_ISSUER from the io_ring_ctx
flags if IORING_SETUP_SQPOLL is set.

Signed-off-by: Caleb Sander Mateos
---
 io_uring/io_uring.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 1e58fc1d5667..05a1ac457581 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -3473,10 +3473,19 @@ static int io_uring_sanitise_params(struct io_uring_params *p)
 	 */
 	if ((flags & (IORING_SETUP_SQE128|IORING_SETUP_SQE_MIXED)) ==
 	    (IORING_SETUP_SQE128|IORING_SETUP_SQE_MIXED))
 		return -EINVAL;
 
+	/*
+	 * If IORING_SETUP_SQPOLL is set, only the SQ thread issues SQEs,
+	 * but other threads may call io_uring_register() concurrently.
+	 * We still need uring_lock to synchronize these io_ring_ctx accesses,
+	 * so disable the single issuer optimizations.
+	 */
+	if (flags & IORING_SETUP_SQPOLL)
+		p->flags &= ~IORING_SETUP_SINGLE_ISSUER;
+
 	return 0;
 }
 
 static int io_uring_fill_params(struct io_uring_params *p)
 {
-- 
2.45.2
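
For context, a minimal userspace sketch (liburing-based and purely
illustrative, not part of this series) of a ring created with both
flags. With this patch, the kernel clears IORING_SETUP_SINGLE_ISSUER
when IORING_SETUP_SQPOLL is requested, since io_uring_register() calls
from other tasks still have to synchronize with the SQ thread's
submissions:

#include <liburing.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	struct io_uring_params p = {
		/* both flags requested; the kernel keeps SQPOLL and drops
		 * the single-issuer optimization internally */
		.flags = IORING_SETUP_SQPOLL | IORING_SETUP_SINGLE_ISSUER,
		.sq_thread_idle = 1000,	/* arbitrary idle time in ms */
	};
	struct io_uring ring;
	int ret;

	ret = io_uring_queue_init_params(8, &ring, &p);
	if (ret < 0) {
		fprintf(stderr, "queue_init: %s\n", strerror(-ret));
		return 1;
	}

	/* only this task prepares SQEs; the kernel SQ thread issues them */

	io_uring_queue_exit(&ring);
	return 0;
}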

From nobody Mon Dec 1 23:34:48 2025
From: Caleb Sander Mateos
To: Jens Axboe
Cc: io-uring@vger.kernel.org, linux-kernel@vger.kernel.org, Caleb Sander Mateos
Subject: [PATCH v3 2/4] io_uring: use io_ring_submit_lock() in io_iopoll_req_issued()
Date: Tue, 25 Nov 2025 16:39:26 -0700
Message-ID: <20251125233928.3962947-3-csander@purestorage.com>
In-Reply-To: <20251125233928.3962947-1-csander@purestorage.com>
References: <20251125233928.3962947-1-csander@purestorage.com>

Use the io_ring_submit_lock() helper in io_iopoll_req_issued() instead
of open-coding the logic. io_ring_submit_unlock() can't be used for the
unlock, though, due to the extra logic before releasing the mutex.

Signed-off-by: Caleb Sander Mateos
---
 io_uring/io_uring.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index 05a1ac457581..d7aaa6e4bfe4 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -1668,15 +1668,13 @@ void io_req_task_complete(struct io_tw_req tw_req, io_tw_token_t tw)
  * accessing the kiocb cookie.
  */
 static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_flags)
 {
 	struct io_ring_ctx *ctx = req->ctx;
-	const bool needs_lock = issue_flags & IO_URING_F_UNLOCKED;
 
 	/* workqueue context doesn't hold uring_lock, grab it now */
-	if (unlikely(needs_lock))
-		mutex_lock(&ctx->uring_lock);
+	io_ring_submit_lock(ctx, issue_flags);
 
 	/*
 	 * Track whether we have multiple files in our lists. This will impact
 	 * how we do polling eventually, not spinning if we're on potentially
 	 * different devices.
@@ -1699,11 +1697,11 @@ static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_flags)
 	if (READ_ONCE(req->iopoll_completed))
 		wq_list_add_head(&req->comp_list, &ctx->iopoll_list);
 	else
 		wq_list_add_tail(&req->comp_list, &ctx->iopoll_list);
 
-	if (unlikely(needs_lock)) {
+	if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) {
 		/*
 		 * If IORING_SETUP_SQPOLL is enabled, sqes are either handle
 		 * in sq thread task context or in io worker task context. If
 		 * current task context is sq thread, we don't need to check
 		 * whether should wake up sq thread.
-- 
2.45.2
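
For reference, the io_ring_submit_lock()/io_ring_submit_unlock()
helpers used above are defined in io_uring/io_uring.h; a rough sketch
of their shape (before patch 3 reworks them to take a lock state
argument) is below. It also shows why io_ring_submit_unlock() can't
replace the open-coded unlock in io_iopoll_req_issued(): the SQ thread
wakeup there has to run inside the same IO_URING_F_UNLOCKED branch,
before the mutex is dropped.

/*
 * Approximate shape of the existing helpers. IO_URING_F_UNLOCKED is
 * set when the caller (an io-wq worker) does not already hold
 * uring_lock; inline submission and the SQPOLL thread do hold it.
 */
static inline void io_ring_submit_lock(struct io_ring_ctx *ctx,
				       unsigned issue_flags)
{
	if (issue_flags & IO_URING_F_UNLOCKED)
		mutex_lock(&ctx->uring_lock);
	lockdep_assert_held(&ctx->uring_lock);
}

static inline void io_ring_submit_unlock(struct io_ring_ctx *ctx,
					 unsigned issue_flags)
{
	lockdep_assert_held(&ctx->uring_lock);
	if (issue_flags & IO_URING_F_UNLOCKED)
		mutex_unlock(&ctx->uring_lock);
}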

From nobody Mon Dec 1 23:34:48 2025
From: Caleb Sander Mateos
To: Jens Axboe
Cc: io-uring@vger.kernel.org, linux-kernel@vger.kernel.org, Caleb Sander Mateos
Subject: [PATCH v3 3/4] io_uring: factor out uring_lock helpers
Date: Tue, 25 Nov 2025 16:39:27 -0700
Message-ID: <20251125233928.3962947-4-csander@purestorage.com>
In-Reply-To: <20251125233928.3962947-1-csander@purestorage.com>
References: <20251125233928.3962947-1-csander@purestorage.com>

A subsequent commit will skip acquiring the io_ring_ctx uring_lock in
io_uring_enter() and io_handle_tw_list() for IORING_SETUP_SINGLE_ISSUER.
Prepare for this change by factoring out the uring_lock accesses under
these functions into helpers. A separate set of helpers is provided for
the io_uring_register() syscall, as it will need a different locking
mechanism in IORING_SETUP_SINGLE_ISSUER mode. Aside from the helpers,
the only remaining access of uring_lock is its mutex_init() call.

Define a struct io_ring_ctx_lock_state to pass state from
io_ring_ctx_lock() to io_ring_ctx_unlock(). It's currently empty but a
subsequent commit will add fields.

Non-io_uring_register() helpers:
- io_ring_ctx_lock() for mutex_lock(&ctx->uring_lock)
- io_ring_ctx_trylock() for mutex_trylock(&ctx->uring_lock)
- io_ring_ctx_unlock() for mutex_unlock(&ctx->uring_lock)
- io_ring_ctx_assert_locked() for lockdep_assert_held(&ctx->uring_lock)

io_uring_register() helpers:
- io_ring_register_ctx_lock() for mutex_lock(&ctx->uring_lock)
- io_ring_register_ctx_lock_nested() for
  mutex_lock_nested(&ctx->uring_lock, ...)
- io_ring_register_ctx_unlock() for mutex_unlock(&ctx->uring_lock)
- io_ring_register_ctx_lock_held() for lockdep_is_held(&ctx->uring_lock)
- io_ring_register_ctx_assert_locked() for
  lockdep_assert_held(&ctx->uring_lock)
- must_hold_io_ring_register_ctx_lock for __must_hold(&ctx->uring_lock)
- releases_io_ring_register_ctx_lock for __releases(ctx->uring_lock)
- acquires_io_ring_register_ctx_lock for __acquires(ctx->uring_lock)

Signed-off-by: Caleb Sander Mateos
---
 include/linux/io_uring_types.h |  12 +--
 io_uring/cancel.c              |  36 ++++---
 io_uring/eventfd.c             |   5 +-
 io_uring/fdinfo.c              |   6 +-
 io_uring/filetable.c           |   8 +-
 io_uring/futex.c               |  14 +--
 io_uring/io_uring.c            | 185 +++++++++++++++++++--------------
 io_uring/io_uring.h            | 109 ++++++++++++++++---
 io_uring/kbuf.c                |  38 ++++---
 io_uring/memmap.h              |   2 +-
 io_uring/msg_ring.c            |  29 ++++--
 io_uring/notif.c               |   5 +-
 io_uring/notif.h               |   3 +-
 io_uring/openclose.c           |  14 +--
 io_uring/poll.c                |  21 ++--
 io_uring/register.c            |  34 +++---
 io_uring/rsrc.c                |  37 ++++---
 io_uring/rsrc.h                |   3 +-
 io_uring/rw.c                  |   2 +-
 io_uring/splice.c              |   5 +-
 io_uring/sqpoll.c              |   5 +-
 io_uring/tctx.c                |  24 +++--
 io_uring/uring_cmd.c           |  13 ++-
 io_uring/waitid.c              |  13 +--
 io_uring/zcrx.c                |   2 +-
 25 files changed, 392 insertions(+), 233 deletions(-)

diff --git a/include/linux/io_uring_types.h b/include/linux/io_uring_types.h
index e1adb0d20a0a..74d202394b20 100644
--- a/include/linux/io_uring_types.h
+++ b/include/linux/io_uring_types.h
@@ -86,11 +86,11 @@ struct io_mapped_region {
 
 /*
  * Return value from io_buffer_list selection, to avoid stashing it in
  * struct io_kiocb. For legacy/classic provided buffers, keeping a reference
  * across execution contexts are fine. But for ring provided buffers, the
- * list may go away as soon as ->uring_lock is dropped. As the io_kiocb
+ * list may go away as soon as the ctx uring lock is dropped. As the io_kiocb
  * persists, it's better to just keep the buffer local for those cases.
  */
 struct io_br_sel {
 	struct io_buffer_list *buf_list;
 	/*
@@ -231,11 +231,11 @@ struct io_submit_link {
 	struct io_kiocb *head;
 	struct io_kiocb *last;
 };
 
 struct io_submit_state {
-	/* inline/task_work completion list, under ->uring_lock */
+	/* inline/task_work completion list, under ctx uring lock */
 	struct io_wq_work_node free_list;
 	/* batch completion logic */
 	struct io_wq_work_list compl_reqs;
 	struct io_submit_link link;
 
@@ -303,16 +303,16 @@ struct io_ring_ctx {
 		unsigned		cached_sq_head;
 		unsigned		sq_entries;
 
 		/*
 		 * Fixed resources fast path, should be accessed only under
-		 * uring_lock, and updated through io_uring_register(2)
+		 * ctx uring lock, and updated through io_uring_register(2)
 		 */
 		atomic_t		cancel_seq;
 
 		/*
-		 * ->iopoll_list is protected by the ctx->uring_lock for
+		 * ->iopoll_list is protected by the ctx uring lock for
 		 * io_uring instances that don't use IORING_SETUP_SQPOLL.
 		 * For SQPOLL, only the single threaded io_sq_thread() will
 		 * manipulate the list, hence no extra locking is needed there.
*/ bool poll_multi_queue; @@ -324,11 +324,11 @@ struct io_ring_ctx { struct io_alloc_cache imu_cache; =20 struct io_submit_state submit_state; =20 /* - * Modifications are protected by ->uring_lock and ->mmap_lock. + * Modifications protected by ctx uring lock and ->mmap_lock. * The buffer list's io mapped region should be stable once * published. */ struct xarray io_bl_xa; =20 @@ -467,11 +467,11 @@ struct io_ring_ctx { struct io_mapped_region param_region; }; =20 /* * Token indicating function is called in task work context: - * ctx->uring_lock is held and any completions generated will be flushed. + * ctx uring lock is held and any completions generated will be flushed. * ONLY core io_uring.c should instantiate this struct. */ struct io_tw_state { bool cancel; }; diff --git a/io_uring/cancel.c b/io_uring/cancel.c index ca12ac10c0ae..54ca7f15cb7b 100644 --- a/io_uring/cancel.c +++ b/io_uring/cancel.c @@ -168,10 +168,11 @@ int io_async_cancel_prep(struct io_kiocb *req, const = struct io_uring_sqe *sqe) static int __io_async_cancel(struct io_cancel_data *cd, struct io_uring_task *tctx, unsigned int issue_flags) { bool all =3D cd->flags & (IORING_ASYNC_CANCEL_ALL|IORING_ASYNC_CANCEL_ANY= ); + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D cd->ctx; struct io_tctx_node *node; int ret, nr =3D 0; =20 do { @@ -182,21 +183,21 @@ static int __io_async_cancel(struct io_cancel_data *c= d, return ret; nr++; } while (1); =20 /* slow path, try all io-wq's */ - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); ret =3D -ENOENT; list_for_each_entry(node, &ctx->tctx_list, ctx_node) { ret =3D io_async_cancel_one(node->task->io_uring, cd); if (ret !=3D -ENOENT) { if (!all) break; nr++; } } - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return all ? 
nr : ret; } =20 int io_async_cancel(struct io_kiocb *req, unsigned int issue_flags) { @@ -238,11 +239,11 @@ int io_async_cancel(struct io_kiocb *req, unsigned in= t issue_flags) static int __io_sync_cancel(struct io_uring_task *tctx, struct io_cancel_data *cd, int fd) { struct io_ring_ctx *ctx =3D cd->ctx; =20 - /* fixed must be grabbed every time since we drop the uring_lock */ + /* fixed must be grabbed every time since we drop the ctx uring lock */ if ((cd->flags & IORING_ASYNC_CANCEL_FD) && (cd->flags & IORING_ASYNC_CANCEL_FD_FIXED)) { struct io_rsrc_node *node; =20 node =3D io_rsrc_node_lookup(&ctx->file_table.data, fd); @@ -255,11 +256,11 @@ static int __io_sync_cancel(struct io_uring_task *tct= x, =20 return __io_async_cancel(cd, tctx, 0); } =20 int io_sync_cancel(struct io_ring_ctx *ctx, void __user *arg) - __must_hold(&ctx->uring_lock) + must_hold_io_ring_register_ctx_lock(ctx) { struct io_cancel_data cd =3D { .ctx =3D ctx, .seq =3D atomic_inc_return(&ctx->cancel_seq), }; @@ -317,11 +318,11 @@ int io_sync_cancel(struct io_ring_ctx *ctx, void __us= er *arg) =20 prepare_to_wait(&ctx->cq_wait, &wait, TASK_INTERRUPTIBLE); =20 ret =3D __io_sync_cancel(current->io_uring, &cd, sc.fd); =20 - mutex_unlock(&ctx->uring_lock); + io_ring_register_ctx_unlock(ctx); if (ret !=3D -EALREADY) break; =20 ret =3D io_run_task_work_sig(ctx); if (ret < 0) @@ -329,15 +330,15 @@ int io_sync_cancel(struct io_ring_ctx *ctx, void __us= er *arg) ret =3D schedule_hrtimeout(&timeout, HRTIMER_MODE_ABS); if (!ret) { ret =3D -ETIME; break; } - mutex_lock(&ctx->uring_lock); + io_ring_register_ctx_lock(ctx); } while (1); =20 finish_wait(&ctx->cq_wait, &wait); - mutex_lock(&ctx->uring_lock); + io_ring_register_ctx_lock(ctx); =20 if (ret =3D=3D -ENOENT || ret > 0) ret =3D 0; out: if (file) @@ -351,11 +352,11 @@ bool io_cancel_remove_all(struct io_ring_ctx *ctx, st= ruct io_uring_task *tctx, { struct hlist_node *tmp; struct io_kiocb *req; bool found =3D false; =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 hlist_for_each_entry_safe(req, tmp, list, hash_node) { if (!io_match_task_safe(req, tctx, cancel_all)) continue; hlist_del_init(&req->hash_node); @@ -368,24 +369,25 @@ bool io_cancel_remove_all(struct io_ring_ctx *ctx, st= ruct io_uring_task *tctx, =20 int io_cancel_remove(struct io_ring_ctx *ctx, struct io_cancel_data *cd, unsigned int issue_flags, struct hlist_head *list, bool (*cancel)(struct io_kiocb *)) { + struct io_ring_ctx_lock_state lock_state; struct hlist_node *tmp; struct io_kiocb *req; int nr =3D 0; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); hlist_for_each_entry_safe(req, tmp, list, hash_node) { if (!io_cancel_req_match(req, cd)) continue; if (cancel(req)) nr++; if (!(cd->flags & IORING_ASYNC_CANCEL_ALL)) break; } - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return nr ?: -ENOENT; } =20 static bool io_match_linked(struct io_kiocb *head) { @@ -477,37 +479,39 @@ __cold bool io_cancel_ctx_cb(struct io_wq_work *work,= void *data) return req->ctx =3D=3D data; } =20 static __cold bool io_uring_try_cancel_iowq(struct io_ring_ctx *ctx) { + struct io_ring_ctx_lock_state lock_state; struct io_tctx_node *node; enum io_wq_cancel cret; bool ret =3D false; =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); list_for_each_entry(node, &ctx->tctx_list, ctx_node) { struct io_uring_task *tctx =3D node->task->io_uring; =20 /* - * io_wq will stay alive while we 
hold uring_lock, because it's - * killed after ctx nodes, which requires to take the lock. + * io_wq will stay alive while we hold ctx uring lock, because + * it's killed after ctx nodes, which requires to take the lock. */ if (!tctx || !tctx->io_wq) continue; cret =3D io_wq_cancel_cb(tctx->io_wq, io_cancel_ctx_cb, ctx, true); ret |=3D (cret !=3D IO_WQ_CANCEL_NOTFOUND); } - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); =20 return ret; } =20 __cold bool io_uring_try_cancel_requests(struct io_ring_ctx *ctx, struct io_uring_task *tctx, bool cancel_all, bool is_sqpoll_thread) { struct io_task_cancel cancel =3D { .tctx =3D tctx, .all =3D cancel_all, }; + struct io_ring_ctx_lock_state lock_state; enum io_wq_cancel cret; bool ret =3D false; =20 /* set it so io_req_local_work_add() would wake us up */ if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) { @@ -542,17 +546,17 @@ __cold bool io_uring_try_cancel_requests(struct io_ri= ng_ctx *ctx, } =20 if ((ctx->flags & IORING_SETUP_DEFER_TASKRUN) && io_allowed_defer_tw_run(ctx)) ret |=3D io_run_local_work(ctx, INT_MAX, INT_MAX) > 0; - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); ret |=3D io_cancel_defer_files(ctx, tctx, cancel_all); ret |=3D io_poll_remove_all(ctx, tctx, cancel_all); ret |=3D io_waitid_remove_all(ctx, tctx, cancel_all); ret |=3D io_futex_remove_all(ctx, tctx, cancel_all); ret |=3D io_uring_try_cancel_uring_cmd(ctx, tctx, cancel_all); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); ret |=3D io_kill_timeouts(ctx, tctx, cancel_all); if (tctx) ret |=3D io_run_task_work() > 0; else ret |=3D flush_delayed_work(&ctx->fallback_work); diff --git a/io_uring/eventfd.c b/io_uring/eventfd.c index 78f8ab7db104..c21ec95cb36c 100644 --- a/io_uring/eventfd.c +++ b/io_uring/eventfd.c @@ -6,10 +6,11 @@ #include #include #include #include =20 +#include "io_uring.h" #include "io-wq.h" #include "eventfd.h" =20 struct io_ev_fd { struct eventfd_ctx *cq_ev_fd; @@ -118,11 +119,11 @@ int io_eventfd_register(struct io_ring_ctx *ctx, void= __user *arg, struct io_ev_fd *ev_fd; __s32 __user *fds =3D arg; int fd; =20 ev_fd =3D rcu_dereference_protected(ctx->io_ev_fd, - lockdep_is_held(&ctx->uring_lock)); + io_ring_register_ctx_lock_held(ctx)); if (ev_fd) return -EBUSY; =20 if (copy_from_user(&fd, fds, sizeof(*fds))) return -EFAULT; @@ -154,11 +155,11 @@ int io_eventfd_register(struct io_ring_ctx *ctx, void= __user *arg, int io_eventfd_unregister(struct io_ring_ctx *ctx) { struct io_ev_fd *ev_fd; =20 ev_fd =3D rcu_dereference_protected(ctx->io_ev_fd, - lockdep_is_held(&ctx->uring_lock)); + io_ring_register_ctx_lock_held(ctx)); if (ev_fd) { ctx->has_evfd =3D false; rcu_assign_pointer(ctx->io_ev_fd, NULL); io_eventfd_put(ev_fd); return 0; diff --git a/io_uring/fdinfo.c b/io_uring/fdinfo.c index a87d4e26eee8..2cb362fc8e80 100644 --- a/io_uring/fdinfo.c +++ b/io_uring/fdinfo.c @@ -75,11 +75,11 @@ static void __io_uring_show_fdinfo(struct io_ring_ctx *= ctx, struct seq_file *m) if (ctx->flags & IORING_SETUP_SQE128) sq_shift =3D 1; =20 /* * we may get imprecise sqe and cqe info if uring is actively running - * since we get cached_sq_head and cached_cq_tail without uring_lock + * since we get cached_sq_head and cached_cq_tail without ctx uring lock * and sq_tail and cq_head are changed by userspace. But it's ok since * we usually use these info when it is stuck. 
*/ seq_printf(m, "SqMask:\t0x%x\n", sq_mask); seq_printf(m, "SqHead:\t%u\n", sq_head); @@ -255,10 +255,10 @@ __cold void io_uring_show_fdinfo(struct seq_file *m, = struct file *file) /* * Avoid ABBA deadlock between the seq lock and the io_uring mutex, * since fdinfo case grabs it in the opposite direction of normal use * cases. */ - if (mutex_trylock(&ctx->uring_lock)) { + if (io_ring_ctx_trylock(ctx)) { __io_uring_show_fdinfo(ctx, m); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, NULL); } } diff --git a/io_uring/filetable.c b/io_uring/filetable.c index 794ef95df293..40ad4a08dc89 100644 --- a/io_uring/filetable.c +++ b/io_uring/filetable.c @@ -55,14 +55,15 @@ void io_free_file_tables(struct io_ring_ctx *ctx, struc= t io_file_table *table) table->bitmap =3D NULL; } =20 static int io_install_fixed_file(struct io_ring_ctx *ctx, struct file *fil= e, u32 slot_index) - __must_hold(&ctx->uring_lock) { struct io_rsrc_node *node; =20 + io_ring_ctx_assert_locked(ctx); + if (io_is_uring_fops(file)) return -EBADF; if (!ctx->file_table.data.nr) return -ENXIO; if (slot_index >=3D ctx->file_table.data.nr) @@ -105,16 +106,17 @@ int __io_fixed_fd_install(struct io_ring_ctx *ctx, st= ruct file *file, * fput() is called correspondingly. */ int io_fixed_fd_install(struct io_kiocb *req, unsigned int issue_flags, struct file *file, unsigned int file_slot) { + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; int ret; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); ret =3D __io_fixed_fd_install(ctx, file, file_slot); - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); =20 if (unlikely(ret < 0)) fput(file); return ret; } diff --git a/io_uring/futex.c b/io_uring/futex.c index 11bfff5a80df..aeda00981c7a 100644 --- a/io_uring/futex.c +++ b/io_uring/futex.c @@ -220,22 +220,23 @@ static void io_futex_wake_fn(struct wake_q_head *wake= _q, struct futex_q *q) =20 int io_futexv_wait(struct io_kiocb *req, unsigned int issue_flags) { struct io_futex *iof =3D io_kiocb_to_cmd(req, struct io_futex); struct io_futexv_data *ifd =3D req->async_data; + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; int ret, woken =3D -1; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); =20 ret =3D futex_wait_multiple_setup(ifd->futexv, iof->futex_nr, &woken); =20 /* * Error case, ret is < 0. Mark the request as failed. 
*/ if (unlikely(ret < 0)) { - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); req_set_fail(req); io_req_set_res(req, ret, 0); io_req_async_data_free(req); return IOU_COMPLETE; } @@ -265,27 +266,28 @@ int io_futexv_wait(struct io_kiocb *req, unsigned int= issue_flags) iof->futexv_unqueued =3D 1; if (woken !=3D -1) io_req_set_res(req, woken, 0); } =20 - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return IOU_ISSUE_SKIP_COMPLETE; } =20 int io_futex_wait(struct io_kiocb *req, unsigned int issue_flags) { struct io_futex *iof =3D io_kiocb_to_cmd(req, struct io_futex); + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; struct io_futex_data *ifd =3D NULL; int ret; =20 if (!iof->futex_mask) { ret =3D -EINVAL; goto done; } =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); ifd =3D io_cache_alloc(&ctx->futex_cache, GFP_NOWAIT); if (!ifd) { ret =3D -ENOMEM; goto done_unlock; } @@ -299,17 +301,17 @@ int io_futex_wait(struct io_kiocb *req, unsigned int = issue_flags) =20 ret =3D futex_wait_setup(iof->uaddr, iof->futex_val, iof->futex_flags, &ifd->q, NULL, NULL); if (!ret) { hlist_add_head(&req->hash_node, &ctx->futex_list); - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); =20 return IOU_ISSUE_SKIP_COMPLETE; } =20 done_unlock: - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); done: if (ret < 0) req_set_fail(req); io_req_set_res(req, ret, 0); io_req_async_data_free(req); diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c index d7aaa6e4bfe4..e05e56a840f9 100644 --- a/io_uring/io_uring.c +++ b/io_uring/io_uring.c @@ -234,20 +234,21 @@ static inline bool io_should_terminate_tw(struct io_r= ing_ctx *ctx) static __cold void io_fallback_req_func(struct work_struct *work) { struct io_ring_ctx *ctx =3D container_of(work, struct io_ring_ctx, fallback_work.work); struct llist_node *node =3D llist_del_all(&ctx->fallback_llist); + struct io_ring_ctx_lock_state lock_state; struct io_kiocb *req, *tmp; struct io_tw_state ts =3D {}; =20 percpu_ref_get(&ctx->refs); - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); ts.cancel =3D io_should_terminate_tw(ctx); llist_for_each_entry_safe(req, tmp, node, io_task_work.node) req->io_task_work.func((struct io_tw_req){req}, ts); io_submit_flush_completions(ctx); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); percpu_ref_put(&ctx->refs); } =20 static int io_alloc_hash_table(struct io_hash_table *table, unsigned bits) { @@ -514,11 +515,11 @@ unsigned io_linked_nr(struct io_kiocb *req) =20 static __cold noinline void io_queue_deferred(struct io_ring_ctx *ctx) { bool drain_seen =3D false, first =3D true; =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); __io_req_caches_free(ctx); =20 while (!list_empty(&ctx->defer_list)) { struct io_defer_entry *de =3D list_first_entry(&ctx->defer_list, struct io_defer_entry, list); @@ -577,13 +578,15 @@ static void io_cq_unlock_post(struct io_ring_ctx *ctx) spin_unlock(&ctx->completion_lock); io_cqring_wake(ctx); io_commit_cqring_flush(ctx); } =20 -static void __io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool dying) +static void +__io_cqring_overflow_flush(struct io_ring_ctx *ctx, bool dying, + struct io_ring_ctx_lock_state *lock_state) { - lockdep_assert_held(&ctx->uring_lock); + 
io_ring_ctx_assert_locked(ctx); =20 /* don't abort if we're dying, entries must get freed */ if (!dying && __io_cqring_events(ctx) =3D=3D ctx->cq_entries) return; =20 @@ -618,13 +621,13 @@ static void __io_cqring_overflow_flush(struct io_ring= _ctx *ctx, bool dying) * to care for a non-real case. */ if (need_resched()) { ctx->cqe_sentinel =3D ctx->cqe_cached; io_cq_unlock_post(ctx); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, lock_state); cond_resched(); - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, lock_state); io_cq_lock(ctx); } } =20 if (list_empty(&ctx->cq_overflow_list)) { @@ -632,21 +635,24 @@ static void __io_cqring_overflow_flush(struct io_ring= _ctx *ctx, bool dying) atomic_andnot(IORING_SQ_CQ_OVERFLOW, &ctx->rings->sq_flags); } io_cq_unlock_post(ctx); } =20 -static void io_cqring_overflow_kill(struct io_ring_ctx *ctx) +static void io_cqring_overflow_kill(struct io_ring_ctx *ctx, + struct io_ring_ctx_lock_state *lock_state) { if (ctx->rings) - __io_cqring_overflow_flush(ctx, true); + __io_cqring_overflow_flush(ctx, true, lock_state); } =20 static void io_cqring_do_overflow_flush(struct io_ring_ctx *ctx) { - mutex_lock(&ctx->uring_lock); - __io_cqring_overflow_flush(ctx, false); - mutex_unlock(&ctx->uring_lock); + struct io_ring_ctx_lock_state lock_state; + + io_ring_ctx_lock(ctx, &lock_state); + __io_cqring_overflow_flush(ctx, false, &lock_state); + io_ring_ctx_unlock(ctx, &lock_state); } =20 /* must to be called somewhat shortly after putting a request */ static inline void io_put_task(struct io_kiocb *req) { @@ -881,15 +887,15 @@ bool io_post_aux_cqe(struct io_ring_ctx *ctx, u64 use= r_data, s32 res, u32 cflags return filled; } =20 /* * Must be called from inline task_work so we know a flush will happen lat= er, - * and obviously with ctx->uring_lock held (tw always has that). + * and obviously with ctx uring lock held (tw always has that). */ void io_add_aux_cqe(struct io_ring_ctx *ctx, u64 user_data, s32 res, u32 c= flags) { - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); lockdep_assert(ctx->lockless_cq); =20 if (!io_fill_cqe_aux(ctx, user_data, res, cflags)) { struct io_cqe cqe =3D io_init_cqe(user_data, res, cflags); =20 @@ -914,11 +920,11 @@ bool io_req_post_cqe(struct io_kiocb *req, s32 res, u= 32 cflags) */ if (!wq_list_empty(&ctx->submit_state.compl_reqs)) __io_submit_flush_completions(ctx); =20 lockdep_assert(!io_wq_current_is_worker()); - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 if (!ctx->lockless_cq) { spin_lock(&ctx->completion_lock); posted =3D io_fill_cqe_aux(ctx, req->cqe.user_data, res, cflags); spin_unlock(&ctx->completion_lock); @@ -938,11 +944,11 @@ bool io_req_post_cqe32(struct io_kiocb *req, struct i= o_uring_cqe cqe[2]) { struct io_ring_ctx *ctx =3D req->ctx; bool posted; =20 lockdep_assert(!io_wq_current_is_worker()); - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 cqe[0].user_data =3D req->cqe.user_data; if (!ctx->lockless_cq) { spin_lock(&ctx->completion_lock); posted =3D io_fill_cqe_aux32(ctx, cqe); @@ -967,11 +973,11 @@ static void io_req_complete_post(struct io_kiocb *req= , unsigned issue_flags) if (WARN_ON_ONCE(!(issue_flags & IO_URING_F_IOWQ))) return; =20 /* * Handle special CQ sync cases via task_work. DEFER_TASKRUN requires - * the submitter task context, IOPOLL protects with uring_lock. + * the submitter task context, IOPOLL protects with ctx uring lock. 
*/ if (ctx->lockless_cq || (req->flags & REQ_F_REISSUE)) { defer_complete: req->io_task_work.func =3D io_req_task_complete; io_req_task_work_add(req); @@ -992,15 +998,14 @@ static void io_req_complete_post(struct io_kiocb *req= , unsigned issue_flags) */ req_ref_put(req); } =20 void io_req_defer_failed(struct io_kiocb *req, s32 res) - __must_hold(&ctx->uring_lock) { const struct io_cold_def *def =3D &io_cold_defs[req->opcode]; =20 - lockdep_assert_held(&req->ctx->uring_lock); + io_ring_ctx_assert_locked(req->ctx); =20 req_set_fail(req); io_req_set_res(req, res, io_put_kbuf(req, res, NULL)); if (def->fail) def->fail(req); @@ -1008,20 +1013,21 @@ void io_req_defer_failed(struct io_kiocb *req, s32 = res) } =20 /* * A request might get retired back into the request caches even before op= code * handlers and io_issue_sqe() are done with it, e.g. inline completion pa= th. - * Because of that, io_alloc_req() should be called only under ->uring_lock + * Because of that, io_alloc_req() should be called only under ctx uring l= ock * and with extra caution to not get a request that is still worked on. */ __cold bool __io_alloc_req_refill(struct io_ring_ctx *ctx) - __must_hold(&ctx->uring_lock) { gfp_t gfp =3D GFP_KERNEL | __GFP_NOWARN | __GFP_ZERO; void *reqs[IO_REQ_ALLOC_BATCH]; int ret; =20 + io_ring_ctx_assert_locked(ctx); + ret =3D kmem_cache_alloc_bulk(req_cachep, gfp, ARRAY_SIZE(reqs), reqs); =20 /* * Bulk alloc is all-or-nothing. If we fail to get a batch, * retry single alloc to be on the safe side. @@ -1078,19 +1084,20 @@ static inline struct io_kiocb *io_req_find_next(str= uct io_kiocb *req) nxt =3D req->link; req->link =3D NULL; return nxt; } =20 -static void ctx_flush_and_put(struct io_ring_ctx *ctx, io_tw_token_t tw) +static void ctx_flush_and_put(struct io_ring_ctx *ctx, io_tw_token_t tw, + struct io_ring_ctx_lock_state *lock_state) { if (!ctx) return; if (ctx->flags & IORING_SETUP_TASKRUN_FLAG) atomic_andnot(IORING_SQ_TASKRUN, &ctx->rings->sq_flags); =20 io_submit_flush_completions(ctx); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, lock_state); percpu_ref_put(&ctx->refs); } =20 /* * Run queued task_work, returning the number of entries processed in *cou= nt. 
@@ -1099,38 +1106,39 @@ static void ctx_flush_and_put(struct io_ring_ctx *c= tx, io_tw_token_t tw) */ struct llist_node *io_handle_tw_list(struct llist_node *node, unsigned int *count, unsigned int max_entries) { + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D NULL; struct io_tw_state ts =3D { }; =20 do { struct llist_node *next =3D node->next; struct io_kiocb *req =3D container_of(node, struct io_kiocb, io_task_work.node); =20 if (req->ctx !=3D ctx) { - ctx_flush_and_put(ctx, ts); + ctx_flush_and_put(ctx, ts, &lock_state); ctx =3D req->ctx; - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); percpu_ref_get(&ctx->refs); ts.cancel =3D io_should_terminate_tw(ctx); } INDIRECT_CALL_2(req->io_task_work.func, io_poll_task_func, io_req_rw_complete, (struct io_tw_req){req}, ts); node =3D next; (*count)++; if (unlikely(need_resched())) { - ctx_flush_and_put(ctx, ts); + ctx_flush_and_put(ctx, ts, &lock_state); ctx =3D NULL; cond_resched(); } } while (node && *count < max_entries); =20 - ctx_flush_and_put(ctx, ts); + ctx_flush_and_put(ctx, ts, &lock_state); return node; } =20 static __cold void __io_fallback_tw(struct llist_node *node, bool sync) { @@ -1399,16 +1407,17 @@ static inline int io_run_local_work_locked(struct i= o_ring_ctx *ctx, max(IO_LOCAL_TW_DEFAULT_MAX, min_events)); } =20 int io_run_local_work(struct io_ring_ctx *ctx, int min_events, int max_eve= nts) { + struct io_ring_ctx_lock_state lock_state; struct io_tw_state ts =3D {}; int ret; =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); ret =3D __io_run_local_work(ctx, ts, min_events, max_events); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); return ret; } =20 static void io_req_task_cancel(struct io_tw_req tw_req, io_tw_token_t tw) { @@ -1463,12 +1472,13 @@ static inline void io_req_put_rsrc_nodes(struct io_= kiocb *req) io_put_rsrc_node(req->ctx, req->buf_node); } =20 static void io_free_batch_list(struct io_ring_ctx *ctx, struct io_wq_work_node *node) - __must_hold(&ctx->uring_lock) { + io_ring_ctx_assert_locked(ctx); + do { struct io_kiocb *req =3D container_of(node, struct io_kiocb, comp_list); =20 if (unlikely(req->flags & IO_REQ_CLEAN_SLOW_FLAGS)) { @@ -1504,15 +1514,16 @@ static void io_free_batch_list(struct io_ring_ctx *= ctx, io_req_add_to_cache(req, ctx); } while (node); } =20 void __io_submit_flush_completions(struct io_ring_ctx *ctx) - __must_hold(&ctx->uring_lock) { struct io_submit_state *state =3D &ctx->submit_state; struct io_wq_work_node *node; =20 + io_ring_ctx_assert_locked(ctx); + __io_cq_lock(ctx); __wq_list_for_each(node, &state->compl_reqs) { struct io_kiocb *req =3D container_of(node, struct io_kiocb, comp_list); =20 @@ -1553,51 +1564,54 @@ static unsigned io_cqring_events(struct io_ring_ctx= *ctx) * We can't just wait for polled events to come to us, we have to actively * find and complete them. */ __cold void io_iopoll_try_reap_events(struct io_ring_ctx *ctx) { + struct io_ring_ctx_lock_state lock_state; + if (!(ctx->flags & IORING_SETUP_IOPOLL)) return; =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); while (!wq_list_empty(&ctx->iopoll_list)) { /* let it sleep and repeat later if can't complete a request */ if (io_do_iopoll(ctx, true) =3D=3D 0) break; /* * Ensure we allow local-to-the-cpu processing to take place, * in this case we need to ensure that we reap all events. * Also let task_work, etc. 
to progress by releasing the mutex */ if (need_resched()) { - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); cond_resched(); - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); } } - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); =20 if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) io_move_task_work_from_local(ctx); } =20 -static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_event= s) +static int io_iopoll_check(struct io_ring_ctx *ctx, unsigned int min_event= s, + struct io_ring_ctx_lock_state *lock_state) { unsigned int nr_events =3D 0; unsigned long check_cq; =20 min_events =3D min(min_events, ctx->cq_entries); =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 if (!io_allowed_run_tw(ctx)) return -EEXIST; =20 check_cq =3D READ_ONCE(ctx->check_cq); if (unlikely(check_cq)) { if (check_cq & BIT(IO_CHECK_CQ_OVERFLOW_BIT)) - __io_cqring_overflow_flush(ctx, false); + __io_cqring_overflow_flush(ctx, false, lock_state); /* * Similarly do not spin if we have not informed the user of any * dropped CQE. */ if (check_cq & BIT(IO_CHECK_CQ_DROPPED_BIT)) @@ -1615,11 +1629,11 @@ static int io_iopoll_check(struct io_ring_ctx *ctx,= unsigned int min_events) int ret =3D 0; =20 /* * If a submit got punted to a workqueue, we can have the * application entering polling for a command before it gets - * issued. That app will hold the uring_lock for the duration + * issued. That app holds the ctx uring lock for the duration * of the poll right here, so we need to take a breather every * now and then to ensure that the issue has a chance to add * the poll to the issued list. Otherwise we can spin here * forever, while the workqueue is stuck trying to acquire the * very same mutex. @@ -1630,13 +1644,13 @@ static int io_iopoll_check(struct io_ring_ctx *ctx,= unsigned int min_events) =20 (void) io_run_local_work_locked(ctx, min_events); =20 if (task_work_pending(current) || wq_list_empty(&ctx->iopoll_list)) { - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, lock_state); io_run_task_work(); - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, lock_state); } /* some requests don't go through iopoll_list */ if (tail !=3D ctx->cached_cq_tail || wq_list_empty(&ctx->iopoll_list)) break; @@ -1667,14 +1681,15 @@ void io_req_task_complete(struct io_tw_req tw_req, = io_tw_token_t tw) * find it from a io_do_iopoll() thread before the issuer is done * accessing the kiocb cookie. */ static void io_iopoll_req_issued(struct io_kiocb *req, unsigned int issue_= flags) { + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; =20 - /* workqueue context doesn't hold uring_lock, grab it now */ - io_ring_submit_lock(ctx, issue_flags); + /* workqueue context doesn't hold ctx uring lock, grab it now */ + io_ring_submit_lock(ctx, issue_flags, &lock_state); =20 /* * Track whether we have multiple files in our lists. This will impact * how we do polling eventually, not spinning if we're on potentially * different devices. 
@@ -1708,11 +1723,11 @@ static void io_iopoll_req_issued(struct io_kiocb *r= eq, unsigned int issue_flags) */ if ((ctx->flags & IORING_SETUP_SQPOLL) && wq_has_sleeper(&ctx->sq_data->wait)) wake_up(&ctx->sq_data->wait); =20 - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); } } =20 io_req_flags_t io_file_get_flags(struct file *file) { @@ -1726,16 +1741,17 @@ io_req_flags_t io_file_get_flags(struct file *file) res |=3D REQ_F_SUPPORT_NOWAIT; return res; } =20 static __cold void io_drain_req(struct io_kiocb *req) - __must_hold(&ctx->uring_lock) { struct io_ring_ctx *ctx =3D req->ctx; bool drain =3D req->flags & IOSQE_IO_DRAIN; struct io_defer_entry *de; =20 + io_ring_ctx_assert_locked(ctx); + de =3D kmalloc(sizeof(*de), GFP_KERNEL_ACCOUNT); if (!de) { io_req_defer_failed(req, -ENOMEM); return; } @@ -1958,23 +1974,24 @@ void io_wq_submit_work(struct io_wq_work *work) } =20 inline struct file *io_file_get_fixed(struct io_kiocb *req, int fd, unsigned int issue_flags) { + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; struct io_rsrc_node *node; struct file *file =3D NULL; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); node =3D io_rsrc_node_lookup(&ctx->file_table.data, fd); if (node) { node->refs++; req->file_node =3D node; req->flags |=3D io_slot_flags(node); file =3D io_slot_file(node); } - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return file; } =20 struct file *io_file_get_normal(struct io_kiocb *req, int fd) { @@ -2002,12 +2019,13 @@ static int io_req_sqe_copy(struct io_kiocb *req, un= signed int issue_flags) def->sqe_copy(req); return 0; } =20 static void io_queue_async(struct io_kiocb *req, unsigned int issue_flags,= int ret) - __must_hold(&req->ctx->uring_lock) { + io_ring_ctx_assert_locked(req->ctx); + if (ret !=3D -EAGAIN || (req->flags & REQ_F_NOWAIT)) { fail: io_req_defer_failed(req, ret); return; } @@ -2027,16 +2045,17 @@ static void io_queue_async(struct io_kiocb *req, un= signed int issue_flags, int r break; } } =20 static inline void io_queue_sqe(struct io_kiocb *req, unsigned int extra_f= lags) - __must_hold(&req->ctx->uring_lock) { unsigned int issue_flags =3D IO_URING_F_NONBLOCK | IO_URING_F_COMPLETE_DEFER | extra_flags; int ret; =20 + io_ring_ctx_assert_locked(req->ctx); + ret =3D io_issue_sqe(req, issue_flags); =20 /* * We async punt it if the file wasn't marked NOWAIT, or if the file * doesn't support non-blocking read/write attempts @@ -2044,12 +2063,13 @@ static inline void io_queue_sqe(struct io_kiocb *re= q, unsigned int extra_flags) if (unlikely(ret)) io_queue_async(req, issue_flags, ret); } =20 static void io_queue_sqe_fallback(struct io_kiocb *req) - __must_hold(&req->ctx->uring_lock) { + io_ring_ctx_assert_locked(req->ctx); + if (unlikely(req->flags & REQ_F_FAIL)) { /* * We don't submit, fail them all, for that replace hardlinks * with normal links. Extra REQ_F_LINK is tolerated. 
*/ @@ -2114,17 +2134,18 @@ static __cold int io_init_fail_req(struct io_kiocb = *req, int err) return err; } =20 static int io_init_req(struct io_ring_ctx *ctx, struct io_kiocb *req, const struct io_uring_sqe *sqe, unsigned int *left) - __must_hold(&ctx->uring_lock) { const struct io_issue_def *def; unsigned int sqe_flags; int personality; u8 opcode; =20 + io_ring_ctx_assert_locked(ctx); + req->ctx =3D ctx; req->opcode =3D opcode =3D READ_ONCE(sqe->opcode); /* same numerical values with corresponding REQ_F_*, safe to copy */ sqe_flags =3D READ_ONCE(sqe->flags); req->flags =3D (__force io_req_flags_t) sqe_flags; @@ -2267,15 +2288,16 @@ static __cold int io_submit_fail_init(const struct = io_uring_sqe *sqe, return 0; } =20 static inline int io_submit_sqe(struct io_ring_ctx *ctx, struct io_kiocb *= req, const struct io_uring_sqe *sqe, unsigned int *left) - __must_hold(&ctx->uring_lock) { struct io_submit_link *link =3D &ctx->submit_state.link; int ret; =20 + io_ring_ctx_assert_locked(ctx); + ret =3D io_init_req(ctx, req, sqe, left); if (unlikely(ret)) return io_submit_fail_init(sqe, req, ret); =20 trace_io_uring_submit_req(req); @@ -2396,16 +2418,17 @@ static bool io_get_sqe(struct io_ring_ctx *ctx, con= st struct io_uring_sqe **sqe) *sqe =3D &ctx->sq_sqes[head]; return true; } =20 int io_submit_sqes(struct io_ring_ctx *ctx, unsigned int nr) - __must_hold(&ctx->uring_lock) { unsigned int entries =3D io_sqring_entries(ctx); unsigned int left; int ret; =20 + io_ring_ctx_assert_locked(ctx); + entries =3D min(nr, entries); if (unlikely(!entries)) return 0; =20 ret =3D left =3D entries; @@ -2825,28 +2848,33 @@ static __cold void __io_req_caches_free(struct io_r= ing_ctx *ctx) } } =20 static __cold void io_req_caches_free(struct io_ring_ctx *ctx) { - guard(mutex)(&ctx->uring_lock); + struct io_ring_ctx_lock_state lock_state; + + io_ring_ctx_lock(ctx, &lock_state); __io_req_caches_free(ctx); + io_ring_ctx_unlock(ctx, &lock_state); } =20 static __cold void io_ring_ctx_free(struct io_ring_ctx *ctx) { + struct io_ring_ctx_lock_state lock_state; + io_sq_thread_finish(ctx); =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); io_sqe_buffers_unregister(ctx); io_sqe_files_unregister(ctx); io_unregister_zcrx_ifqs(ctx); - io_cqring_overflow_kill(ctx); + io_cqring_overflow_kill(ctx, &lock_state); io_eventfd_unregister(ctx); io_free_alloc_caches(ctx); io_destroy_buffers(ctx); io_free_region(ctx->user, &ctx->param_region); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); if (ctx->sq_creds) put_cred(ctx->sq_creds); if (ctx->submitter_task) put_task_struct(ctx->submitter_task); =20 @@ -2877,14 +2905,15 @@ static __cold void io_ring_ctx_free(struct io_ring_= ctx *ctx) =20 static __cold void io_activate_pollwq_cb(struct callback_head *cb) { struct io_ring_ctx *ctx =3D container_of(cb, struct io_ring_ctx, poll_wq_task_work); + struct io_ring_ctx_lock_state lock_state; =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); ctx->poll_activated =3D true; - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); =20 /* * Wake ups for some events between start of polling and activation * might've been lost due to loose synchronisation. 
*/ @@ -2974,10 +3003,11 @@ static __cold void io_tctx_exit_cb(struct callback_= head *cb) } =20 static __cold void io_ring_exit_work(struct work_struct *work) { struct io_ring_ctx *ctx =3D container_of(work, struct io_ring_ctx, exit_w= ork); + struct io_ring_ctx_lock_state lock_state; unsigned long timeout =3D jiffies + HZ * 60 * 5; unsigned long interval =3D HZ / 20; struct io_tctx_exit exit; struct io_tctx_node *node; int ret; @@ -2988,13 +3018,13 @@ static __cold void io_ring_exit_work(struct work_st= ruct *work) * we're waiting for refs to drop. We need to reap these manually, * as nobody else will be looking for them. */ do { if (test_bit(IO_CHECK_CQ_OVERFLOW_BIT, &ctx->check_cq)) { - mutex_lock(&ctx->uring_lock); - io_cqring_overflow_kill(ctx); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); + io_cqring_overflow_kill(ctx, &lock_state); + io_ring_ctx_unlock(ctx, &lock_state); } =20 if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) io_move_task_work_from_local(ctx); =20 @@ -3035,11 +3065,11 @@ static __cold void io_ring_exit_work(struct work_st= ruct *work) =20 init_completion(&exit.completion); init_task_work(&exit.task_work, io_tctx_exit_cb); exit.ctx =3D ctx; =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); while (!list_empty(&ctx->tctx_list)) { WARN_ON_ONCE(time_after(jiffies, timeout)); =20 node =3D list_first_entry(&ctx->tctx_list, struct io_tctx_node, ctx_node); @@ -3047,20 +3077,20 @@ static __cold void io_ring_exit_work(struct work_st= ruct *work) list_rotate_left(&ctx->tctx_list); ret =3D task_work_add(node->task, &exit.task_work, TWA_SIGNAL); if (WARN_ON_ONCE(ret)) continue; =20 - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); /* * See comment above for * wait_for_completion_interruptible_timeout() on why this * wait is marked as interruptible. 
*/ wait_for_completion_interruptible(&exit.completion); - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); } - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); spin_lock(&ctx->completion_lock); spin_unlock(&ctx->completion_lock); =20 /* pairs with RCU read section in io_req_local_work_add() */ if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) @@ -3069,18 +3099,19 @@ static __cold void io_ring_exit_work(struct work_st= ruct *work) io_ring_ctx_free(ctx); } =20 static __cold void io_ring_ctx_wait_and_kill(struct io_ring_ctx *ctx) { + struct io_ring_ctx_lock_state lock_state; unsigned long index; struct creds *creds; =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); percpu_ref_kill(&ctx->refs); xa_for_each(&ctx->personalities, index, creds) io_unregister_personality(ctx, index); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); =20 flush_delayed_work(&ctx->fallback_work); =20 INIT_WORK(&ctx->exit_work, io_ring_exit_work); /* @@ -3211,10 +3242,11 @@ static int io_get_ext_arg(struct io_ring_ctx *ctx, = unsigned flags, =20 SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u32, to_submit, u32, min_complete, u32, flags, const void __user *, argp, size_t, argsz) { + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx; struct file *file; long ret; =20 if (unlikely(flags & ~IORING_ENTER_FLAGS)) @@ -3267,14 +3299,14 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u= 32, to_submit, } else if (to_submit) { ret =3D io_uring_add_tctx_node(ctx); if (unlikely(ret)) goto out; =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); ret =3D io_submit_sqes(ctx, to_submit); if (ret !=3D to_submit) { - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); goto out; } if (flags & IORING_ENTER_GETEVENTS) { if (ctx->syscall_iopoll) goto iopoll_locked; @@ -3283,11 +3315,11 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u= 32, to_submit, * it should handle ownership problems if any. */ if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) (void)io_run_local_work_locked(ctx, min_complete); } - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); } =20 if (flags & IORING_ENTER_GETEVENTS) { int ret2; =20 @@ -3296,16 +3328,17 @@ SYSCALL_DEFINE6(io_uring_enter, unsigned int, fd, u= 32, to_submit, * We disallow the app entering submit/complete with * polling, but we still need to lock the ring to * prevent racing with polled issue that got punted to * a workqueue. */ - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); iopoll_locked: ret2 =3D io_validate_ext_arg(ctx, flags, argp, argsz); if (likely(!ret2)) - ret2 =3D io_iopoll_check(ctx, min_complete); - mutex_unlock(&ctx->uring_lock); + ret2 =3D io_iopoll_check(ctx, min_complete, + &lock_state); + io_ring_ctx_unlock(ctx, &lock_state); } else { struct ext_arg ext_arg =3D { .argsz =3D argsz }; =20 ret2 =3D io_get_ext_arg(ctx, flags, argp, &ext_arg); if (likely(!ret2)) @@ -3474,12 +3507,12 @@ static int io_uring_sanitise_params(struct io_uring= _params *p) return -EINVAL; =20 /* * If IORING_SETUP_SQPOLL is set, only the SQ thread issues SQEs, * but other threads may call io_uring_register() concurrently. - * We still need uring_lock to synchronize these io_ring_ctx accesses, - * so disable the single issuer optimizations. + * We still need ctx uring lock to synchronize these io_ring_ctx + * accesses, so disable the single issuer optimizations. 
*/ if (flags & IORING_SETUP_SQPOLL) p->flags &=3D ~IORING_SETUP_SINGLE_ISSUER; =20 return 0; diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h index a790c16854d3..23dae0af530b 100644 --- a/io_uring/io_uring.h +++ b/io_uring/io_uring.h @@ -195,20 +195,98 @@ void io_queue_next(struct io_kiocb *req); void io_task_refs_refill(struct io_uring_task *tctx); bool __io_alloc_req_refill(struct io_ring_ctx *ctx); =20 void io_activate_pollwq(struct io_ring_ctx *ctx); =20 +struct io_ring_ctx_lock_state { +}; + +/* Acquire the ctx uring lock */ +static inline void io_ring_ctx_lock(struct io_ring_ctx *ctx, + struct io_ring_ctx_lock_state *state) +{ + mutex_lock(&ctx->uring_lock); +} + +/* Attempt to acquire the ctx uring lock without blocking */ +static inline bool io_ring_ctx_trylock(struct io_ring_ctx *ctx) +{ + return mutex_trylock(&ctx->uring_lock); +} + +/* Release the ctx uring lock */ +static inline void io_ring_ctx_unlock(struct io_ring_ctx *ctx, + struct io_ring_ctx_lock_state *state) +{ + mutex_unlock(&ctx->uring_lock); +} + +/* Assert (if CONFIG_LOCKDEP) that the ctx uring lock is held */ +static inline void io_ring_ctx_assert_locked(const struct io_ring_ctx *ctx) +{ + lockdep_assert_held(&ctx->uring_lock); +} + +/* Acquire the ctx uring lock during the io_uring_register() syscall */ +static inline void io_ring_register_ctx_lock(struct io_ring_ctx *ctx) +{ + mutex_lock(&ctx->uring_lock); +} + +/* + * Acquire the ctx uring lock with the given nesting level + * during the io_uring_register() syscall + */ +static inline void +io_ring_register_ctx_lock_nested(struct io_ring_ctx *ctx, unsigned int sub= class) +{ + mutex_lock_nested(&ctx->uring_lock, subclass); +} + +/* Release the ctx uring lock during the io_uring_register() syscall */ +static inline void io_ring_register_ctx_unlock(struct io_ring_ctx *ctx) +{ + mutex_unlock(&ctx->uring_lock); +} + +/* + * Return (if CONFIG_LOCKDEP) whether the ctx uring lock is held + * during the io_uring_register() syscall + */ +static inline bool io_ring_register_ctx_lock_held(const struct io_ring_ctx= *ctx) +{ + return lockdep_is_held(&ctx->uring_lock); +} + +/* + * Assert (if CONFIG_LOCKDEP) that the ctx uring lock is held + * during the io_uring_register() syscall. Implies io_ring_ctx_assert_lock= ed(). 
+ */ +static inline void +io_ring_register_ctx_assert_locked(const struct io_ring_ctx *ctx) +{ + lockdep_assert_held(&ctx->uring_lock); +} + +/* Annotations for functions called during the io_uring_register() syscall= */ +/* Must be called with the ctx uring lock held */ +#define must_hold_io_ring_register_ctx_lock(ctx) __must_hold(&(ctx)->uring= _lock) +/* Releases the ctx uring lock */ +#define releases_io_ring_register_ctx_lock(ctx) __releases((ctx)->uring_lo= ck) +/* Acquires the ctx uring lock */ +#define acquires_io_ring_register_ctx_lock(ctx) __acquires((ctx)->uring_lo= ck) + static inline void io_lockdep_assert_cq_locked(struct io_ring_ctx *ctx) { #if defined(CONFIG_PROVE_LOCKING) lockdep_assert(in_task()); =20 if (ctx->flags & IORING_SETUP_DEFER_TASKRUN) - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 if (ctx->flags & IORING_SETUP_IOPOLL) { - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); } else if (!ctx->task_complete) { lockdep_assert_held(&ctx->completion_lock); } else if (ctx->submitter_task) { /* * ->submitter_task may be NULL and we can still post a CQE, @@ -373,30 +451,32 @@ static inline void io_put_file(struct io_kiocb *req) { if (!(req->flags & REQ_F_FIXED_FILE) && req->file) fput(req->file); } =20 -static inline void io_ring_submit_unlock(struct io_ring_ctx *ctx, - unsigned issue_flags) +static inline void +io_ring_submit_unlock(struct io_ring_ctx *ctx, unsigned issue_flags, + struct io_ring_ctx_lock_state *lock_state) { - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, lock_state); } =20 -static inline void io_ring_submit_lock(struct io_ring_ctx *ctx, - unsigned issue_flags) +static inline void +io_ring_submit_lock(struct io_ring_ctx *ctx, unsigned issue_flags, + struct io_ring_ctx_lock_state *lock_state) { /* - * "Normal" inline submissions always hold the uring_lock, since we + * "Normal" inline submissions always hold the ctx uring lock, since we * grab it from the system call. Same is true for the SQPOLL offload. * The only exception is when we've detached the request and issue it * from an async worker thread, grab the lock for that case. */ if (unlikely(issue_flags & IO_URING_F_UNLOCKED)) - mutex_lock(&ctx->uring_lock); - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_lock(ctx, lock_state); + io_ring_ctx_assert_locked(ctx); } =20 static inline void io_commit_cqring(struct io_ring_ctx *ctx) { /* order cqe stores with ring update */ @@ -504,24 +584,23 @@ static inline bool io_task_work_pending(struct io_rin= g_ctx *ctx) return task_work_pending(current) || io_local_work_pending(ctx); } =20 static inline void io_tw_lock(struct io_ring_ctx *ctx, io_tw_token_t tw) { - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); } =20 /* * Don't complete immediately but use deferred completion infrastructure. - * Protected by ->uring_lock and can only be used either with + * Protected by ctx uring lock and can only be used either with * IO_URING_F_COMPLETE_DEFER or inside a tw handler holding the mutex.
*/ static inline void io_req_complete_defer(struct io_kiocb *req) - __must_hold(&req->ctx->uring_lock) { struct io_submit_state *state =3D &req->ctx->submit_state; =20 - lockdep_assert_held(&req->ctx->uring_lock); + io_ring_ctx_assert_locked(req->ctx); =20 wq_list_add_tail(&req->comp_list, &state->compl_reqs); } =20 static inline void io_commit_cqring_flush(struct io_ring_ctx *ctx) diff --git a/io_uring/kbuf.c b/io_uring/kbuf.c index 8a329556f8df..592e3d3a42ee 100644 --- a/io_uring/kbuf.c +++ b/io_uring/kbuf.c @@ -72,22 +72,22 @@ bool io_kbuf_commit(struct io_kiocb *req, } =20 static inline struct io_buffer_list *io_buffer_get_list(struct io_ring_ctx= *ctx, unsigned int bgid) { - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 return xa_load(&ctx->io_bl_xa, bgid); } =20 static int io_buffer_add_list(struct io_ring_ctx *ctx, struct io_buffer_list *bl, unsigned int bgid) { /* * Store buffer group ID and finally mark the list as visible. * The normal lookup doesn't care about the visibility as we're - * always under the ->uring_lock, but lookups from mmap do. + * always under the ctx uring lock, but lookups from mmap do. */ bl->bgid =3D bgid; guard(mutex)(&ctx->mmap_lock); return xa_err(xa_store(&ctx->io_bl_xa, bgid, bl, GFP_KERNEL)); } @@ -101,23 +101,24 @@ void io_kbuf_drop_legacy(struct io_kiocb *req) req->kbuf =3D NULL; } =20 bool io_kbuf_recycle_legacy(struct io_kiocb *req, unsigned issue_flags) { + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; struct io_buffer_list *bl; struct io_buffer *buf; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); =20 buf =3D req->kbuf; bl =3D io_buffer_get_list(ctx, buf->bgid); list_add(&buf->list, &bl->buf_list); bl->nbufs++; req->flags &=3D ~REQ_F_BUFFER_SELECTED; =20 - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return true; } =20 static void __user *io_provided_buffer_select(struct io_kiocb *req, size_t= *len, struct io_buffer_list *bl) @@ -210,24 +211,25 @@ static struct io_br_sel io_ring_buffer_select(struct = io_kiocb *req, size_t *len, } =20 struct io_br_sel io_buffer_select(struct io_kiocb *req, size_t *len, unsigned buf_group, unsigned int issue_flags) { + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; struct io_br_sel sel =3D { }; struct io_buffer_list *bl; =20 - io_ring_submit_lock(req->ctx, issue_flags); + io_ring_submit_lock(req->ctx, issue_flags, &lock_state); =20 bl =3D io_buffer_get_list(ctx, buf_group); if (likely(bl)) { if (bl->flags & IOBL_BUF_RING) sel =3D io_ring_buffer_select(req, len, bl, issue_flags); else sel.addr =3D io_provided_buffer_select(req, len, bl); } - io_ring_submit_unlock(req->ctx, issue_flags); + io_ring_submit_unlock(req->ctx, issue_flags, &lock_state); return sel; } =20 /* cap it at a reasonable 256, will be one page even for 4K */ #define PEEK_MAX_IMPORT 256 @@ -315,14 +317,15 @@ static int io_ring_buffers_peek(struct io_kiocb *req,= struct buf_sel_arg *arg, } =20 int io_buffers_select(struct io_kiocb *req, struct buf_sel_arg *arg, struct io_br_sel *sel, unsigned int issue_flags) { + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; int ret =3D -ENOENT; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); sel->buf_list =3D io_buffer_get_list(ctx, arg->buf_group); if (unlikely(!sel->buf_list)) goto out_unlock; =20 if (sel->buf_list->flags & 
IOBL_BUF_RING) { @@ -342,11 +345,11 @@ int io_buffers_select(struct io_kiocb *req, struct bu= f_sel_arg *arg, ret =3D io_provided_buffers_select(req, &arg->out_len, sel->buf_list, ar= g->iovs); } out_unlock: if (issue_flags & IO_URING_F_UNLOCKED) { sel->buf_list =3D NULL; - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); } return ret; } =20 int io_buffers_peek(struct io_kiocb *req, struct buf_sel_arg *arg, @@ -354,11 +357,11 @@ int io_buffers_peek(struct io_kiocb *req, struct buf_= sel_arg *arg, { struct io_ring_ctx *ctx =3D req->ctx; struct io_buffer_list *bl; int ret; =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 bl =3D io_buffer_get_list(ctx, arg->buf_group); if (unlikely(!bl)) return -ENOENT; =20 @@ -409,12 +412,16 @@ static int io_remove_buffers_legacy(struct io_ring_ct= x *ctx, unsigned long nbufs) { unsigned long i =3D 0; struct io_buffer *nxt; =20 - /* protects io_buffers_cache */ - lockdep_assert_held(&ctx->uring_lock); + /* + * ctx uring lock protects io_buffers_cache. io_remove_buffers_legacy() + * is called from both issue and io_uring_register() paths, + * but io_ring_ctx_assert_locked() is valid for both. + */ + io_ring_ctx_assert_locked(ctx); WARN_ON_ONCE(bl->flags & IOBL_BUF_RING); =20 for (i =3D 0; i < nbufs && !list_empty(&bl->buf_list); i++) { nxt =3D list_first_entry(&bl->buf_list, struct io_buffer, list); list_del(&nxt->list); @@ -579,18 +586,19 @@ static int __io_manage_buffers_legacy(struct io_kiocb= *req, } =20 int io_manage_buffers_legacy(struct io_kiocb *req, unsigned int issue_flag= s) { struct io_provide_buf *p =3D io_kiocb_to_cmd(req, struct io_provide_buf); + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; struct io_buffer_list *bl; int ret; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); bl =3D io_buffer_get_list(ctx, p->bgid); ret =3D __io_manage_buffers_legacy(req, bl); - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); =20 if (ret < 0) req_set_fail(req); io_req_set_res(req, ret, 0); return IOU_COMPLETE; @@ -604,11 +612,11 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, vo= id __user *arg) struct io_uring_buf_ring *br; unsigned long mmap_offset; unsigned long ring_size; int ret; =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_register_ctx_assert_locked(ctx); =20 if (copy_from_user(®, arg, sizeof(reg))) return -EFAULT; if (!mem_is_zero(reg.resv, sizeof(reg.resv))) return -EINVAL; @@ -680,11 +688,11 @@ int io_register_pbuf_ring(struct io_ring_ctx *ctx, vo= id __user *arg) int io_unregister_pbuf_ring(struct io_ring_ctx *ctx, void __user *arg) { struct io_uring_buf_reg reg; struct io_buffer_list *bl; =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_register_ctx_assert_locked(ctx); =20 if (copy_from_user(®, arg, sizeof(reg))) return -EFAULT; if (!mem_is_zero(reg.resv, sizeof(reg.resv)) || reg.flags) return -EINVAL; diff --git a/io_uring/memmap.h b/io_uring/memmap.h index a39d9e518905..080285686a05 100644 --- a/io_uring/memmap.h +++ b/io_uring/memmap.h @@ -35,11 +35,11 @@ static inline void io_region_publish(struct io_ring_ctx= *ctx, struct io_mapped_region *src_region, struct io_mapped_region *dst_region) { /* * Once published mmap can find it without holding only the ->mmap_lock - * and not ->uring_lock. + * and not the ctx uring lock. 
*/ guard(mutex)(&ctx->mmap_lock); *dst_region =3D *src_region; } =20 diff --git a/io_uring/msg_ring.c b/io_uring/msg_ring.c index 7063ea7964e7..67fa2fb7e0b9 100644 --- a/io_uring/msg_ring.c +++ b/io_uring/msg_ring.c @@ -30,29 +30,31 @@ struct io_msg { u32 cqe_flags; }; u32 flags; }; =20 -static void io_double_unlock_ctx(struct io_ring_ctx *octx) +static void io_double_unlock_ctx(struct io_ring_ctx *octx, + struct io_ring_ctx_lock_state *lock_state) { - mutex_unlock(&octx->uring_lock); + io_ring_ctx_unlock(octx, lock_state); } =20 static int io_lock_external_ctx(struct io_ring_ctx *octx, - unsigned int issue_flags) + unsigned int issue_flags, + struct io_ring_ctx_lock_state *lock_state) { /* * To ensure proper ordering between the two ctxs, we can only * attempt a trylock on the target. If that fails and we already have * the source ctx lock, punt to io-wq. */ if (!(issue_flags & IO_URING_F_UNLOCKED)) { - if (!mutex_trylock(&octx->uring_lock)) + if (!io_ring_ctx_trylock(octx)) return -EAGAIN; return 0; } - mutex_lock(&octx->uring_lock); + io_ring_ctx_lock(octx, lock_state); return 0; } =20 void io_msg_ring_cleanup(struct io_kiocb *req) { @@ -116,10 +118,11 @@ static int io_msg_data_remote(struct io_ring_ctx *tar= get_ctx, } =20 static int __io_msg_ring_data(struct io_ring_ctx *target_ctx, struct io_msg *msg, unsigned int issue_flags) { + struct io_ring_ctx_lock_state lock_state; u32 flags =3D 0; int ret; =20 if (msg->src_fd || msg->flags & ~IORING_MSG_RING_FLAGS_PASS) return -EINVAL; @@ -134,17 +137,18 @@ static int __io_msg_ring_data(struct io_ring_ctx *tar= get_ctx, if (msg->flags & IORING_MSG_RING_FLAGS_PASS) flags =3D msg->cqe_flags; =20 ret =3D -EOVERFLOW; if (target_ctx->flags & IORING_SETUP_IOPOLL) { - if (unlikely(io_lock_external_ctx(target_ctx, issue_flags))) + if (unlikely(io_lock_external_ctx(target_ctx, issue_flags, + &lock_state))) return -EAGAIN; } if (io_post_aux_cqe(target_ctx, msg->user_data, msg->len, flags)) ret =3D 0; if (target_ctx->flags & IORING_SETUP_IOPOLL) - io_double_unlock_ctx(target_ctx); + io_double_unlock_ctx(target_ctx, &lock_state); return ret; } =20 static int io_msg_ring_data(struct io_kiocb *req, unsigned int issue_flags) { @@ -155,35 +159,38 @@ static int io_msg_ring_data(struct io_kiocb *req, uns= igned int issue_flags) } =20 static int io_msg_grab_file(struct io_kiocb *req, unsigned int issue_flags) { struct io_msg *msg =3D io_kiocb_to_cmd(req, struct io_msg); + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; struct io_rsrc_node *node; int ret =3D -EBADF; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); node =3D io_rsrc_node_lookup(&ctx->file_table.data, msg->src_fd); if (node) { msg->src_file =3D io_slot_file(node); if (msg->src_file) get_file(msg->src_file); req->flags |=3D REQ_F_NEED_CLEANUP; ret =3D 0; } - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return ret; } =20 static int io_msg_install_complete(struct io_kiocb *req, unsigned int issu= e_flags) { struct io_ring_ctx *target_ctx =3D req->file->private_data; struct io_msg *msg =3D io_kiocb_to_cmd(req, struct io_msg); + struct io_ring_ctx_lock_state lock_state; struct file *src_file =3D msg->src_file; int ret; =20 - if (unlikely(io_lock_external_ctx(target_ctx, issue_flags))) + if (unlikely(io_lock_external_ctx(target_ctx, issue_flags, + &lock_state))) return -EAGAIN; =20 ret =3D __io_fixed_fd_install(target_ctx, src_file, msg->dst_fd); if (ret < 0) goto 
out_unlock; @@ -200,11 +207,11 @@ static int io_msg_install_complete(struct io_kiocb *r= eq, unsigned int issue_flag * later IORING_OP_MSG_RING delivers the message. */ if (!io_post_aux_cqe(target_ctx, msg->user_data, ret, 0)) ret =3D -EOVERFLOW; out_unlock: - io_double_unlock_ctx(target_ctx); + io_double_unlock_ctx(target_ctx, &lock_state); return ret; } =20 static void io_msg_tw_fd_complete(struct callback_head *head) { diff --git a/io_uring/notif.c b/io_uring/notif.c index f476775ba44b..8099b87af588 100644 --- a/io_uring/notif.c +++ b/io_uring/notif.c @@ -15,11 +15,11 @@ static void io_notif_tw_complete(struct io_tw_req tw_re= q, io_tw_token_t tw) { struct io_kiocb *notif =3D tw_req.req; struct io_notif_data *nd =3D io_notif_to_data(notif); struct io_ring_ctx *ctx =3D notif->ctx; =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 do { notif =3D cmd_to_io_kiocb(nd); =20 if (WARN_ON_ONCE(ctx !=3D notif->ctx)) @@ -109,15 +109,16 @@ static const struct ubuf_info_ops io_ubuf_ops =3D { .complete =3D io_tx_ubuf_complete, .link_skb =3D io_link_skb, }; =20 struct io_kiocb *io_alloc_notif(struct io_ring_ctx *ctx) - __must_hold(&ctx->uring_lock) { struct io_kiocb *notif; struct io_notif_data *nd; =20 + io_ring_ctx_assert_locked(ctx); + if (unlikely(!io_alloc_req(ctx, ¬if))) return NULL; notif->ctx =3D ctx; notif->opcode =3D IORING_OP_NOP; notif->flags =3D 0; diff --git a/io_uring/notif.h b/io_uring/notif.h index f3589cfef4a9..c33c9a1179c9 100644 --- a/io_uring/notif.h +++ b/io_uring/notif.h @@ -31,14 +31,15 @@ static inline struct io_notif_data *io_notif_to_data(st= ruct io_kiocb *notif) { return io_kiocb_to_cmd(notif, struct io_notif_data); } =20 static inline void io_notif_flush(struct io_kiocb *notif) - __must_hold(¬if->ctx->uring_lock) { struct io_notif_data *nd =3D io_notif_to_data(notif); =20 + io_ring_ctx_assert_locked(notif->ctx); + io_tx_ubuf_complete(NULL, &nd->uarg, true); } =20 static inline int io_notif_account_mem(struct io_kiocb *notif, unsigned le= n) { diff --git a/io_uring/openclose.c b/io_uring/openclose.c index bfeb91b31bba..432a7a68eec1 100644 --- a/io_uring/openclose.c +++ b/io_uring/openclose.c @@ -189,15 +189,16 @@ void io_open_cleanup(struct io_kiocb *req) } =20 int __io_close_fixed(struct io_ring_ctx *ctx, unsigned int issue_flags, unsigned int offset) { + struct io_ring_ctx_lock_state lock_state; int ret; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); ret =3D io_fixed_fd_remove(ctx, offset); - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); =20 return ret; } =20 static inline int io_close_fixed(struct io_kiocb *req, unsigned int issue_= flags) @@ -333,18 +334,19 @@ int io_pipe_prep(struct io_kiocb *req, const struct i= o_uring_sqe *sqe) =20 static int io_pipe_fixed(struct io_kiocb *req, struct file **files, unsigned int issue_flags) { struct io_pipe *p =3D io_kiocb_to_cmd(req, struct io_pipe); + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; int ret, fds[2] =3D { -1, -1 }; int slot =3D p->file_slot; =20 if (p->flags & O_CLOEXEC) return -EINVAL; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); =20 ret =3D __io_fixed_fd_install(ctx, files[0], slot); if (ret < 0) goto err; fds[0] =3D ret; @@ -361,23 +363,23 @@ static int io_pipe_fixed(struct io_kiocb *req, struct= file **files, if (ret < 0) goto err; fds[1] =3D ret; files[1] =3D NULL; =20 - 
io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); =20 if (!copy_to_user(p->fds, fds, sizeof(fds))) return 0; =20 ret =3D -EFAULT; - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); err: if (fds[0] !=3D -1) io_fixed_fd_remove(ctx, fds[0]); if (fds[1] !=3D -1) io_fixed_fd_remove(ctx, fds[1]); - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return ret; } =20 static int io_pipe_fd(struct io_kiocb *req, struct file **files) { diff --git a/io_uring/poll.c b/io_uring/poll.c index 8aa4e3a31e73..3af45bef85f6 100644 --- a/io_uring/poll.c +++ b/io_uring/poll.c @@ -121,11 +121,11 @@ static struct io_poll *io_poll_get_single(struct io_k= iocb *req) static void io_poll_req_insert(struct io_kiocb *req) { struct io_hash_table *table =3D &req->ctx->cancel_table; u32 index =3D hash_long(req->cqe.user_data, table->hash_bits); =20 - lockdep_assert_held(&req->ctx->uring_lock); + io_ring_ctx_assert_locked(req->ctx); =20 hlist_add_head(&req->hash_node, &table->hbs[index].list); } =20 static void io_init_poll_iocb(struct io_poll *poll, __poll_t events) @@ -321,11 +321,11 @@ void io_poll_task_func(struct io_tw_req tw_req, io_tw= _token_t tw) } else if (ret =3D=3D IOU_POLL_REQUEUE) { __io_poll_execute(req, 0); return; } io_poll_remove_entries(req); - /* task_work always has ->uring_lock held */ + /* task_work always holds ctx uring lock */ hash_del(&req->hash_node); =20 if (req->opcode =3D=3D IORING_OP_POLL_ADD) { if (ret =3D=3D IOU_POLL_DONE) { struct io_poll *poll; @@ -524,15 +524,16 @@ static bool io_poll_can_finish_inline(struct io_kiocb= *req, return pt->owning || io_poll_get_ownership(req); } =20 static void io_poll_add_hash(struct io_kiocb *req, unsigned int issue_flag= s) { + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); io_poll_req_insert(req); - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); } =20 /* * Returns 0 when it's handed over for polling. The caller owns the reques= ts if * it returns non-zero, but otherwise should not touch it. 
Negative values @@ -727,11 +728,11 @@ __cold bool io_poll_remove_all(struct io_ring_ctx *ct= x, struct io_uring_task *tc struct hlist_node *tmp; struct io_kiocb *req; bool found =3D false; int i; =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 for (i =3D 0; i < nr_buckets; i++) { struct io_hash_bucket *hb =3D &ctx->cancel_table.hbs[i]; =20 hlist_for_each_entry_safe(req, tmp, &hb->list, hash_node) { @@ -813,15 +814,16 @@ static int __io_poll_cancel(struct io_ring_ctx *ctx, = struct io_cancel_data *cd) } =20 int io_poll_cancel(struct io_ring_ctx *ctx, struct io_cancel_data *cd, unsigned issue_flags) { + struct io_ring_ctx_lock_state lock_state; int ret; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); ret =3D __io_poll_cancel(ctx, cd); - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return ret; } =20 static __poll_t io_poll_parse_events(const struct io_uring_sqe *sqe, unsigned int flags) @@ -904,16 +906,17 @@ int io_poll_add(struct io_kiocb *req, unsigned int is= sue_flags) } =20 int io_poll_remove(struct io_kiocb *req, unsigned int issue_flags) { struct io_poll_update *poll_update =3D io_kiocb_to_cmd(req, struct io_pol= l_update); + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; struct io_cancel_data cd =3D { .ctx =3D ctx, .data =3D poll_update->old_u= ser_data, }; struct io_kiocb *preq; int ret2, ret =3D 0; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); preq =3D io_poll_find(ctx, true, &cd); ret2 =3D io_poll_disarm(preq); if (ret2) { ret =3D ret2; goto out; @@ -944,11 +947,11 @@ int io_poll_remove(struct io_kiocb *req, unsigned int= issue_flags) req_set_fail(preq); io_req_set_res(preq, -ECANCELED, 0); preq->io_task_work.func =3D io_req_task_complete; io_req_task_work_add(preq); out: - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); if (ret < 0) { req_set_fail(req); return ret; } /* complete update request, we're done with it */ diff --git a/io_uring/register.c b/io_uring/register.c index db42f98562c4..c6d5b6e7c422 100644 --- a/io_uring/register.c +++ b/io_uring/register.c @@ -205,13 +205,13 @@ static __cold int __io_register_iowq_aff(struct io_ri= ng_ctx *ctx, int ret; =20 if (!(ctx->flags & IORING_SETUP_SQPOLL)) { ret =3D io_wq_cpu_affinity(current->io_uring, new_mask); } else { - mutex_unlock(&ctx->uring_lock); + io_ring_register_ctx_unlock(ctx); ret =3D io_sqpoll_wq_cpu_affinity(ctx, new_mask); - mutex_lock(&ctx->uring_lock); + io_ring_register_ctx_lock(ctx); } =20 return ret; } =20 @@ -252,11 +252,11 @@ static __cold int io_unregister_iowq_aff(struct io_ri= ng_ctx *ctx) return __io_register_iowq_aff(ctx, NULL); } =20 static __cold int io_register_iowq_max_workers(struct io_ring_ctx *ctx, void __user *arg) - __must_hold(&ctx->uring_lock) + must_hold_io_ring_register_ctx_lock(ctx) { struct io_tctx_node *node; struct io_uring_task *tctx =3D NULL; struct io_sq_data *sqd =3D NULL; __u32 new_count[2]; @@ -272,18 +272,18 @@ static __cold int io_register_iowq_max_workers(struct= io_ring_ctx *ctx, sqd =3D ctx->sq_data; if (sqd) { struct task_struct *tsk; =20 /* - * Observe the correct sqd->lock -> ctx->uring_lock - * ordering. Fine to drop uring_lock here, we hold + * Observe the correct sqd->lock -> ctx uring lock + * ordering. Fine to drop ctx uring lock here, we hold * a ref to the ctx. 
*/ refcount_inc(&sqd->refs); - mutex_unlock(&ctx->uring_lock); + io_ring_register_ctx_unlock(ctx); mutex_lock(&sqd->lock); - mutex_lock(&ctx->uring_lock); + io_ring_register_ctx_lock(ctx); tsk =3D sqpoll_task_locked(sqd); if (tsk) tctx =3D tsk->io_uring; } } else { @@ -304,14 +304,14 @@ static __cold int io_register_iowq_max_workers(struct= io_ring_ctx *ctx, } else { memset(new_count, 0, sizeof(new_count)); } =20 if (sqd) { - mutex_unlock(&ctx->uring_lock); + io_ring_register_ctx_unlock(ctx); mutex_unlock(&sqd->lock); io_put_sq_data(sqd); - mutex_lock(&ctx->uring_lock); + io_ring_register_ctx_lock(ctx); } =20 if (copy_to_user(arg, new_count, sizeof(new_count))) return -EFAULT; =20 @@ -331,14 +331,14 @@ static __cold int io_register_iowq_max_workers(struct= io_ring_ctx *ctx, (void)io_wq_max_workers(tctx->io_wq, new_count); } return 0; err: if (sqd) { - mutex_unlock(&ctx->uring_lock); + io_ring_register_ctx_unlock(ctx); mutex_unlock(&sqd->lock); io_put_sq_data(sqd); - mutex_lock(&ctx->uring_lock); + io_ring_register_ctx_lock(ctx); } return ret; } =20 static int io_register_clock(struct io_ring_ctx *ctx, @@ -468,13 +468,13 @@ static int io_register_resize_rings(struct io_ring_ct= x *ctx, void __user *arg) =20 /* * If using SQPOLL, park the thread */ if (ctx->sq_data) { - mutex_unlock(&ctx->uring_lock); + io_ring_register_ctx_unlock(ctx); io_sq_thread_park(ctx->sq_data); - mutex_lock(&ctx->uring_lock); + io_ring_register_ctx_lock(ctx); } =20 /* * We'll do the swap. Grab the ctx->mmap_lock, which will exclude * any new mmap's on the ring fd. Clear out existing mappings to prevent @@ -606,12 +606,12 @@ static int io_register_mem_region(struct io_ring_ctx = *ctx, void __user *uarg) return 0; } =20 static int __io_uring_register(struct io_ring_ctx *ctx, unsigned opcode, void __user *arg, unsigned nr_args) - __releases(ctx->uring_lock) - __acquires(ctx->uring_lock) + releases_io_ring_register_ctx_lock(ctx) + acquires_io_ring_register_ctx_lock(ctx) { int ret; =20 /* * We don't quiesce the refs for register anymore and so it can't be @@ -913,15 +913,15 @@ SYSCALL_DEFINE4(io_uring_register, unsigned int, fd, = unsigned int, opcode, file =3D io_uring_register_get_file(fd, use_registered_ring); if (IS_ERR(file)) return PTR_ERR(file); ctx =3D file->private_data; =20 - mutex_lock(&ctx->uring_lock); + io_ring_register_ctx_lock(ctx); ret =3D __io_uring_register(ctx, opcode, arg, nr_args); =20 trace_io_uring_register(ctx, opcode, ctx->file_table.data.nr, ctx->buf_table.nr, ret); - mutex_unlock(&ctx->uring_lock); + io_ring_register_ctx_unlock(ctx); =20 fput(file); return ret; } diff --git a/io_uring/rsrc.c b/io_uring/rsrc.c index 3765a50329a8..ef2c75871f46 100644 --- a/io_uring/rsrc.c +++ b/io_uring/rsrc.c @@ -349,11 +349,11 @@ static int __io_register_rsrc_update(struct io_ring_c= tx *ctx, unsigned type, struct io_uring_rsrc_update2 *up, unsigned nr_args) { __u32 tmp; =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 if (check_add_overflow(up->offset, nr_args, &tmp)) return -EOVERFLOW; =20 switch (type) { @@ -497,14 +497,16 @@ int io_files_update(struct io_kiocb *req, unsigned in= t issue_flags) up2.resv2 =3D 0; =20 if (up->offset =3D=3D IORING_FILE_INDEX_ALLOC) { ret =3D io_files_update_with_index_alloc(req, issue_flags); } else { - io_ring_submit_lock(ctx, issue_flags); + struct io_ring_ctx_lock_state lock_state; + + io_ring_submit_lock(ctx, issue_flags, &lock_state); ret =3D __io_register_rsrc_update(ctx, IORING_RSRC_FILE, &up2, up->nr_args); - io_ring_submit_unlock(ctx, 
issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); } =20 if (ret < 0) req_set_fail(req); io_req_set_res(req, ret, 0); @@ -940,18 +942,19 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd,= struct request *rq, void (*release)(void *), unsigned int index, unsigned int issue_flags) { struct io_ring_ctx *ctx =3D cmd_to_io_kiocb(cmd)->ctx; struct io_rsrc_data *data =3D &ctx->buf_table; + struct io_ring_ctx_lock_state lock_state; struct req_iterator rq_iter; struct io_mapped_ubuf *imu; struct io_rsrc_node *node; struct bio_vec bv; unsigned int nr_bvecs =3D 0; int ret =3D 0; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); if (index >=3D data->nr) { ret =3D -EINVAL; goto unlock; } index =3D array_index_nospec(index, data->nr); @@ -993,24 +996,25 @@ int io_buffer_register_bvec(struct io_uring_cmd *cmd,= struct request *rq, imu->nr_bvecs =3D nr_bvecs; =20 node->buf =3D imu; data->nodes[index] =3D node; unlock: - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return ret; } EXPORT_SYMBOL_GPL(io_buffer_register_bvec); =20 int io_buffer_unregister_bvec(struct io_uring_cmd *cmd, unsigned int index, unsigned int issue_flags) { struct io_ring_ctx *ctx =3D cmd_to_io_kiocb(cmd)->ctx; struct io_rsrc_data *data =3D &ctx->buf_table; + struct io_ring_ctx_lock_state lock_state; struct io_rsrc_node *node; int ret =3D 0; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); if (index >=3D data->nr) { ret =3D -EINVAL; goto unlock; } index =3D array_index_nospec(index, data->nr); @@ -1026,11 +1030,11 @@ int io_buffer_unregister_bvec(struct io_uring_cmd *= cmd, unsigned int index, } =20 io_put_rsrc_node(ctx, node); data->nodes[index] =3D NULL; unlock: - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return ret; } EXPORT_SYMBOL_GPL(io_buffer_unregister_bvec); =20 static int validate_fixed_range(u64 buf_addr, size_t len, @@ -1117,27 +1121,28 @@ static int io_import_fixed(int ddir, struct iov_ite= r *iter, } =20 inline struct io_rsrc_node *io_find_buf_node(struct io_kiocb *req, unsigned issue_flags) { + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; struct io_rsrc_node *node; =20 if (req->flags & REQ_F_BUF_NODE) return req->buf_node; req->flags |=3D REQ_F_BUF_NODE; =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); node =3D io_rsrc_node_lookup(&ctx->buf_table, req->buf_index); if (node) { node->refs++; req->buf_node =3D node; - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return node; } req->flags &=3D ~REQ_F_BUF_NODE; - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return NULL; } =20 int io_import_reg_buf(struct io_kiocb *req, struct iov_iter *iter, u64 buf_addr, size_t len, int ddir, @@ -1154,24 +1159,24 @@ int io_import_reg_buf(struct io_kiocb *req, struct = iov_iter *iter, /* Lock two rings at once. The rings must be different! */ static void lock_two_rings(struct io_ring_ctx *ctx1, struct io_ring_ctx *c= tx2) { if (ctx1 > ctx2) swap(ctx1, ctx2); - mutex_lock(&ctx1->uring_lock); - mutex_lock_nested(&ctx2->uring_lock, SINGLE_DEPTH_NESTING); + io_ring_register_ctx_lock(ctx1); + io_ring_register_ctx_lock_nested(ctx2, SINGLE_DEPTH_NESTING); } =20 /* Both rings are locked by the caller. 
*/ static int io_clone_buffers(struct io_ring_ctx *ctx, struct io_ring_ctx *s= rc_ctx, struct io_uring_clone_buffers *arg) { struct io_rsrc_data data; int i, ret, off, nr; unsigned int nbufs; =20 - lockdep_assert_held(&ctx->uring_lock); - lockdep_assert_held(&src_ctx->uring_lock); + io_ring_register_ctx_assert_locked(ctx); + io_ring_register_ctx_assert_locked(src_ctx); =20 /* * Accounting state is shared between the two rings; that only works if * both rings are accounted towards the same counters. */ @@ -1300,11 +1305,11 @@ int io_register_clone_buffers(struct io_ring_ctx *c= tx, void __user *arg) if (IS_ERR(file)) return PTR_ERR(file); =20 src_ctx =3D file->private_data; if (src_ctx !=3D ctx) { - mutex_unlock(&ctx->uring_lock); + io_ring_register_ctx_unlock(ctx); lock_two_rings(ctx, src_ctx); =20 if (src_ctx->submitter_task && src_ctx->submitter_task !=3D current) { ret =3D -EEXIST; @@ -1314,11 +1319,11 @@ int io_register_clone_buffers(struct io_ring_ctx *c= tx, void __user *arg) =20 ret =3D io_clone_buffers(ctx, src_ctx, &buf); =20 out: if (src_ctx !=3D ctx) - mutex_unlock(&src_ctx->uring_lock); + io_ring_register_ctx_unlock(src_ctx); =20 fput(file); return ret; } =20 diff --git a/io_uring/rsrc.h b/io_uring/rsrc.h index d603f6a47f5e..de03296439dc 100644 --- a/io_uring/rsrc.h +++ b/io_uring/rsrc.h @@ -2,10 +2,11 @@ #ifndef IOU_RSRC_H #define IOU_RSRC_H =20 #include #include +#include "io_uring.h" =20 #define IO_VEC_CACHE_SOFT_CAP 256 =20 enum { IORING_RSRC_FILE =3D 0, @@ -97,11 +98,11 @@ static inline struct io_rsrc_node *io_rsrc_node_lookup(= struct io_rsrc_data *data return NULL; } =20 static inline void io_put_rsrc_node(struct io_ring_ctx *ctx, struct io_rsr= c_node *node) { - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); if (!--node->refs) io_free_rsrc_node(ctx, node); } =20 static inline bool io_reset_rsrc_node(struct io_ring_ctx *ctx, diff --git a/io_uring/rw.c b/io_uring/rw.c index a7b568c3dfe8..8ed1ba1b462b 100644 --- a/io_uring/rw.c +++ b/io_uring/rw.c @@ -463,11 +463,11 @@ int io_read_mshot_prep(struct io_kiocb *req, const st= ruct io_uring_sqe *sqe) =20 void io_readv_writev_cleanup(struct io_kiocb *req) { struct io_async_rw *rw =3D req->async_data; =20 - lockdep_assert_held(&req->ctx->uring_lock); + io_ring_ctx_assert_locked(req->ctx); io_vec_free(&rw->vec); io_rw_recycle(req, 0); } =20 static inline loff_t *io_kiocb_update_pos(struct io_kiocb *req) diff --git a/io_uring/splice.c b/io_uring/splice.c index e81ebbb91925..567695c39091 100644 --- a/io_uring/splice.c +++ b/io_uring/splice.c @@ -58,26 +58,27 @@ void io_splice_cleanup(struct io_kiocb *req) =20 static struct file *io_splice_get_file(struct io_kiocb *req, unsigned int issue_flags) { struct io_splice *sp =3D io_kiocb_to_cmd(req, struct io_splice); + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; struct io_rsrc_node *node; struct file *file =3D NULL; =20 if (!(sp->flags & SPLICE_F_FD_IN_FIXED)) return io_file_get_normal(req, sp->splice_fd_in); =20 - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); node =3D io_rsrc_node_lookup(&ctx->file_table.data, sp->splice_fd_in); if (node) { node->refs++; sp->rsrc_node =3D node; file =3D io_slot_file(node); req->flags |=3D REQ_F_NEED_CLEANUP; } - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return file; } =20 int io_tee(struct io_kiocb *req, unsigned int issue_flags) { diff --git a/io_uring/sqpoll.c b/io_uring/sqpoll.c index 
74c1a130cd87..0b4573b53cf3 100644 --- a/io_uring/sqpoll.c +++ b/io_uring/sqpoll.c @@ -211,29 +211,30 @@ static int __io_sq_thread(struct io_ring_ctx *ctx, st= ruct io_sq_data *sqd, /* if we're handling multiple rings, cap submit size for fairness */ if (cap_entries && to_submit > IORING_SQPOLL_CAP_ENTRIES_VALUE) to_submit =3D IORING_SQPOLL_CAP_ENTRIES_VALUE; =20 if (to_submit || !wq_list_empty(&ctx->iopoll_list)) { + struct io_ring_ctx_lock_state lock_state; const struct cred *creds =3D NULL; =20 io_sq_start_worktime(ist); =20 if (ctx->sq_creds !=3D current_cred()) creds =3D override_creds(ctx->sq_creds); =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); if (!wq_list_empty(&ctx->iopoll_list)) io_do_iopoll(ctx, true); =20 /* * Don't submit if refs are dying, good for io_uring_register(), * but also it is relied upon by io_ring_exit_work() */ if (to_submit && likely(!percpu_ref_is_dying(&ctx->refs)) && !(ctx->flags & IORING_SETUP_R_DISABLED)) ret =3D io_submit_sqes(ctx, to_submit); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); =20 if (to_submit && wq_has_sleeper(&ctx->sqo_sq_wait)) wake_up(&ctx->sqo_sq_wait); if (creds) revert_creds(creds); diff --git a/io_uring/tctx.c b/io_uring/tctx.c index 5b66755579c0..7ac1cb963a72 100644 --- a/io_uring/tctx.c +++ b/io_uring/tctx.c @@ -13,27 +13,28 @@ #include "tctx.h" =20 static struct io_wq *io_init_wq_offload(struct io_ring_ctx *ctx, struct task_struct *task) { + struct io_ring_ctx_lock_state lock_state; struct io_wq_hash *hash; struct io_wq_data data; unsigned int concurrency; =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); hash =3D ctx->hash_map; if (!hash) { hash =3D kzalloc(sizeof(*hash), GFP_KERNEL); if (!hash) { - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); return ERR_PTR(-ENOMEM); } refcount_set(&hash->refs, 1); init_waitqueue_head(&hash->wait); ctx->hash_map =3D hash; } - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); =20 data.hash =3D hash; data.task =3D task; =20 /* Do QD, or 4 * CPUS, whatever is smallest */ @@ -121,10 +122,12 @@ int __io_uring_add_tctx_node(struct io_ring_ctx *ctx) if (ret) return ret; } } if (!xa_load(&tctx->xa, (unsigned long)ctx)) { + struct io_ring_ctx_lock_state lock_state; + node =3D kmalloc(sizeof(*node), GFP_KERNEL); if (!node) return -ENOMEM; node->ctx =3D ctx; node->task =3D current; @@ -134,13 +137,13 @@ int __io_uring_add_tctx_node(struct io_ring_ctx *ctx) if (ret) { kfree(node); return ret; } =20 - mutex_lock(&ctx->uring_lock); + io_ring_ctx_lock(ctx, &lock_state); list_add(&node->ctx_node, &ctx->tctx_list); - mutex_unlock(&ctx->uring_lock); + io_ring_ctx_unlock(ctx, &lock_state); } return 0; } =20 int __io_uring_add_tctx_node_from_submit(struct io_ring_ctx *ctx) @@ -163,10 +166,11 @@ int __io_uring_add_tctx_node_from_submit(struct io_ri= ng_ctx *ctx) * Remove this io_uring_file -> task mapping. 
*/ __cold void io_uring_del_tctx_node(unsigned long index) { struct io_uring_task *tctx =3D current->io_uring; + struct io_ring_ctx_lock_state lock_state; struct io_tctx_node *node; =20 if (!tctx) return; node =3D xa_erase(&tctx->xa, index); @@ -174,13 +178,13 @@ __cold void io_uring_del_tctx_node(unsigned long inde= x) return; =20 WARN_ON_ONCE(current !=3D node->task); WARN_ON_ONCE(list_empty(&node->ctx_node)); =20 - mutex_lock(&node->ctx->uring_lock); + io_ring_ctx_lock(node->ctx, &lock_state); list_del(&node->ctx_node); - mutex_unlock(&node->ctx->uring_lock); + io_ring_ctx_unlock(node->ctx, &lock_state); =20 if (tctx->last =3D=3D node->ctx) tctx->last =3D NULL; kfree(node); } @@ -196,11 +200,11 @@ __cold void io_uring_clean_tctx(struct io_uring_task = *tctx) cond_resched(); } if (wq) { /* * Must be after io_uring_del_tctx_node() (removes nodes under - * uring_lock) to avoid race with io_uring_try_cancel_iowq(). + * ctx uring lock) to avoid race with io_uring_try_cancel_iowq() */ io_wq_put_and_exit(wq); tctx->io_wq =3D NULL; } } @@ -269,13 +273,13 @@ int io_ringfd_register(struct io_ring_ctx *ctx, void = __user *__arg, int ret, i; =20 if (!nr_args || nr_args > IO_RINGFD_REG_MAX) return -EINVAL; =20 - mutex_unlock(&ctx->uring_lock); + io_ring_register_ctx_unlock(ctx); ret =3D __io_uring_add_tctx_node(ctx); - mutex_lock(&ctx->uring_lock); + io_ring_register_ctx_lock(ctx); if (ret) return ret; =20 tctx =3D current->io_uring; for (i =3D 0; i < nr_args; i++) { diff --git a/io_uring/uring_cmd.c b/io_uring/uring_cmd.c index 197474911f04..a8a128a3f0a2 100644 --- a/io_uring/uring_cmd.c +++ b/io_uring/uring_cmd.c @@ -51,11 +51,11 @@ bool io_uring_try_cancel_uring_cmd(struct io_ring_ctx *= ctx, { struct hlist_node *tmp; struct io_kiocb *req; bool ret =3D false; =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 hlist_for_each_entry_safe(req, tmp, &ctx->cancelable_uring_cmd, hash_node) { struct io_uring_cmd *cmd =3D io_kiocb_to_cmd(req, struct io_uring_cmd); @@ -76,19 +76,20 @@ bool io_uring_try_cancel_uring_cmd(struct io_ring_ctx *= ctx, =20 static void io_uring_cmd_del_cancelable(struct io_uring_cmd *cmd, unsigned int issue_flags) { struct io_kiocb *req =3D cmd_to_io_kiocb(cmd); + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; =20 if (!(cmd->flags & IORING_URING_CMD_CANCELABLE)) return; =20 cmd->flags &=3D ~IORING_URING_CMD_CANCELABLE; - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); hlist_del(&req->hash_node); - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); } =20 /* * Mark this command as concelable, then io_uring_try_cancel_uring_cmd() * will try to cancel this issued command by sending ->uring_cmd() with @@ -103,14 +104,16 @@ void io_uring_cmd_mark_cancelable(struct io_uring_cmd= *cmd, { struct io_kiocb *req =3D cmd_to_io_kiocb(cmd); struct io_ring_ctx *ctx =3D req->ctx; =20 if (!(cmd->flags & IORING_URING_CMD_CANCELABLE)) { + struct io_ring_ctx_lock_state lock_state; + cmd->flags |=3D IORING_URING_CMD_CANCELABLE; - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); hlist_add_head(&req->hash_node, &ctx->cancelable_uring_cmd); - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); } } EXPORT_SYMBOL_GPL(io_uring_cmd_mark_cancelable); =20 void __io_uring_cmd_do_in_task(struct io_uring_cmd *ioucmd, diff --git a/io_uring/waitid.c b/io_uring/waitid.c index 
2d4cbd47c67c..a69eb1b30b89 100644 --- a/io_uring/waitid.c +++ b/io_uring/waitid.c @@ -130,11 +130,11 @@ static void io_waitid_complete(struct io_kiocb *req, = int ret) struct io_waitid *iw =3D io_kiocb_to_cmd(req, struct io_waitid); =20 /* anyone completing better be holding a reference */ WARN_ON_ONCE(!(atomic_read(&iw->refs) & IO_WAITID_REF_MASK)); =20 - lockdep_assert_held(&req->ctx->uring_lock); + io_ring_ctx_assert_locked(req->ctx); =20 hlist_del_init(&req->hash_node); io_waitid_remove_wq(req); =20 ret =3D io_waitid_finish(req, ret); @@ -145,11 +145,11 @@ static void io_waitid_complete(struct io_kiocb *req, = int ret) =20 static bool __io_waitid_cancel(struct io_kiocb *req) { struct io_waitid *iw =3D io_kiocb_to_cmd(req, struct io_waitid); =20 - lockdep_assert_held(&req->ctx->uring_lock); + io_ring_ctx_assert_locked(req->ctx); =20 /* * Mark us canceled regardless of ownership. This will prevent a * potential retry from a spurious wakeup. */ @@ -280,10 +280,11 @@ int io_waitid_prep(struct io_kiocb *req, const struct= io_uring_sqe *sqe) =20 int io_waitid(struct io_kiocb *req, unsigned int issue_flags) { struct io_waitid *iw =3D io_kiocb_to_cmd(req, struct io_waitid); struct io_waitid_async *iwa =3D req->async_data; + struct io_ring_ctx_lock_state lock_state; struct io_ring_ctx *ctx =3D req->ctx; int ret; =20 ret =3D kernel_waitid_prepare(&iwa->wo, iw->which, iw->upid, &iw->info, iw->options, NULL); @@ -301,11 +302,11 @@ int io_waitid(struct io_kiocb *req, unsigned int issu= e_flags) * Cancel must hold the ctx lock, so there's no risk of cancelation * finding us until a) we remain on the list, and b) the lock is * dropped. We only need to worry about racing with the wakeup * callback. */ - io_ring_submit_lock(ctx, issue_flags); + io_ring_submit_lock(ctx, issue_flags, &lock_state); =20 /* * iw->head is valid under the ring lock, and as long as the request * is on the waitid_list where cancelations may find it. */ @@ -321,27 +322,27 @@ int io_waitid(struct io_kiocb *req, unsigned int issu= e_flags) /* * Nobody else grabbed a reference, it'll complete when we get * a waitqueue callback, or if someone cancels it. */ if (!io_waitid_drop_issue_ref(req)) { - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return IOU_ISSUE_SKIP_COMPLETE; } =20 /* * Wakeup triggered, racing with us. It was prevented from * completing because of that, queue up the tw to do that. 
*/ - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); return IOU_ISSUE_SKIP_COMPLETE; } =20 hlist_del_init(&req->hash_node); io_waitid_remove_wq(req); ret =3D io_waitid_finish(req, ret); =20 - io_ring_submit_unlock(ctx, issue_flags); + io_ring_submit_unlock(ctx, issue_flags, &lock_state); done: if (ret < 0) req_set_fail(req); io_req_set_res(req, ret, 0); return IOU_COMPLETE; diff --git a/io_uring/zcrx.c b/io_uring/zcrx.c index b99cf2c6670a..f2ed49bbad63 100644 --- a/io_uring/zcrx.c +++ b/io_uring/zcrx.c @@ -851,11 +851,11 @@ static struct net_iov *__io_zcrx_get_free_niov(struct= io_zcrx_area *area) =20 void io_unregister_zcrx_ifqs(struct io_ring_ctx *ctx) { struct io_zcrx_ifq *ifq; =20 - lockdep_assert_held(&ctx->uring_lock); + io_ring_ctx_assert_locked(ctx); =20 while (1) { scoped_guard(mutex, &ctx->mmap_lock) { unsigned long id =3D 0; =20 --=20 2.45.2 From nobody Mon Dec 1 23:34:48 2025
From: Caleb Sander Mateos To: Jens Axboe Cc: io-uring@vger.kernel.org, linux-kernel@vger.kernel.org, Caleb Sander Mateos Subject: [PATCH v3 4/4] io_uring: avoid uring_lock for IORING_SETUP_SINGLE_ISSUER Date: Tue, 25 Nov 2025 16:39:28 -0700 Message-ID: <20251125233928.3962947-5-csander@purestorage.com> X-Mailer: git-send-email 2.45.2 In-Reply-To: <20251125233928.3962947-1-csander@purestorage.com> References: <20251125233928.3962947-1-csander@purestorage.com> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" io_ring_ctx's mutex uring_lock can be quite expensive in high-IOPS workloads. Even when only one thread pinned to a single CPU is accessing the io_ring_ctx, the atomic CASes required to lock and unlock the mutex are very hot instructions. The mutex's primary purpose is to prevent concurrent io_uring system calls on the same io_ring_ctx. However, there is already a flag IORING_SETUP_SINGLE_ISSUER that promises only one task will make io_uring_enter() and io_uring_register() system calls on the io_ring_ctx once it's enabled.
So if the io_ring_ctx is set up with IORING_SETUP_SINGLE_ISSUER, skip the
uring_lock mutex_lock() and mutex_unlock() on the submitter_task. When
other tasks need to acquire the ctx uring lock, use a task work item to
suspend the submitter_task for the critical section.

In io_uring_register(), continue to always acquire the uring_lock mutex.
io_uring_register() can be called on a disabled io_ring_ctx (indeed, it's
required to enable it), when submitter_task isn't set yet. After
submitter_task is set, io_uring_register() is only permitted on
submitter_task, so uring_lock suffices to exclude all other users.

Signed-off-by: Caleb Sander Mateos
---
 io_uring/io_uring.c |  11 +++++
 io_uring/io_uring.h | 101 ++++++++++++++++++++++++++++++++++++++++++--
 2 files changed, 109 insertions(+), 3 deletions(-)

diff --git a/io_uring/io_uring.c b/io_uring/io_uring.c
index e05e56a840f9..64e4e57e2c11 100644
--- a/io_uring/io_uring.c
+++ b/io_uring/io_uring.c
@@ -363,10 +363,21 @@ static __cold struct io_ring_ctx *io_ring_ctx_alloc(struct io_uring_params *p)
 	xa_destroy(&ctx->io_bl_xa);
 	kfree(ctx);
 	return NULL;
 }
 
+void io_ring_suspend_work(struct callback_head *cb_head)
+{
+	struct io_ring_suspend_work *suspend_work =
+		container_of(cb_head, struct io_ring_suspend_work, cb_head);
+	DECLARE_COMPLETION_ONSTACK(suspend_end);
+
+	suspend_work->lock_state->suspend_end = &suspend_end;
+	complete(&suspend_work->suspend_start);
+	wait_for_completion(&suspend_end);
+}
+
 static void io_clean_op(struct io_kiocb *req)
 {
 	if (unlikely(req->flags & REQ_F_BUFFER_SELECTED))
 		io_kbuf_drop_legacy(req);
 
diff --git a/io_uring/io_uring.h b/io_uring/io_uring.h
index 23dae0af530b..262971224cc6 100644
--- a/io_uring/io_uring.h
+++ b/io_uring/io_uring.h
@@ -1,8 +1,9 @@
 #ifndef IOU_CORE_H
 #define IOU_CORE_H
 
+#include
 #include
 #include
 #include
 #include
 #include
@@ -195,36 +196,130 @@ void io_queue_next(struct io_kiocb *req);
 void io_task_refs_refill(struct io_uring_task *tctx);
 bool __io_alloc_req_refill(struct io_ring_ctx *ctx);
 
 void io_activate_pollwq(struct io_ring_ctx *ctx);
 
+/*
+ * The ctx uring lock protects most of the mutable struct io_ring_ctx state
+ * accessed in the struct io_kiocb issue path. In the I/O path, it is typically
+ * acquired in the io_uring_enter() syscall and io_handle_tw_list(). For
+ * IORING_SETUP_SQPOLL, it's acquired by io_sq_thread() instead. io_kiocb's
+ * issued with IO_URING_F_UNLOCKED in issue_flags (e.g. by io_wq_submit_work())
+ * acquire and release the ctx uring lock whenever they must touch io_ring_ctx
+ * state. io_uring_register() also acquires the ctx uring lock because most
+ * opcodes mutate io_ring_ctx state accessed in the issue path.
+ *
+ * For !IORING_SETUP_SINGLE_ISSUER io_ring_ctx's, acquiring the ctx uring lock
+ * is always done via mutex_(try)lock(&ctx->uring_lock).
+ *
+ * However, for IORING_SETUP_SINGLE_ISSUER, we can avoid the mutex_lock() +
+ * mutex_unlock() overhead on submitter_task because a single thread can't race
+ * with itself. In the uncommon case where the ctx uring lock is needed on
+ * another thread, it must suspend submitter_task by scheduling a task work item
+ * on it. io_ring_ctx_lock() returns once the task work item has started.
+ * submitter_task is unblocked once io_ring_ctx_unlock() is called.
+ *
+ * io_uring_register() requires special treatment for IORING_SETUP_SINGLE_ISSUER
+ * since it's allowed on an IORING_SETUP_R_DISABLED io_ring_ctx, where
+ * submitter_task isn't set yet. Hence the io_ring_register_ctx_*() family
+ * of helpers. They unconditionally acquire the uring_lock mutex, which always
+ * works to exclude other ctx uring lock users:
+ * - For !IORING_SETUP_SINGLE_ISSUER, all users acquire the ctx uring lock via
+ *   the uring_lock mutex
+ * - For IORING_SETUP_SINGLE_ISSUER and IORING_SETUP_R_DISABLED, only
+ *   io_uring_register() is allowed before the io_ring_ctx is enabled.
+ *   So again, all ctx uring lock users acquire the uring_lock mutex.
+ * - For IORING_SETUP_SINGLE_ISSUER and !IORING_SETUP_R_DISABLED,
+ *   io_uring_register() is only permitted on submitter_task, which is always
+ *   granted the ctx uring lock unless suspended.
+ *   Acquiring the uring_lock mutex is unnecessary but still correct.
+ */
+
 struct io_ring_ctx_lock_state {
+	struct completion *suspend_end;
 };
 
+struct io_ring_suspend_work {
+	struct callback_head cb_head;
+	struct completion suspend_start;
+	struct io_ring_ctx_lock_state *lock_state;
+};
+
+void io_ring_suspend_work(struct callback_head *cb_head);
+
 /* Acquire the ctx uring lock */
 static inline void io_ring_ctx_lock(struct io_ring_ctx *ctx,
 				    struct io_ring_ctx_lock_state *state)
 {
-	mutex_lock(&ctx->uring_lock);
+	struct io_ring_suspend_work suspend_work;
+	struct task_struct *submitter_task;
+
+	if (!(ctx->flags & IORING_SETUP_SINGLE_ISSUER)) {
+		mutex_lock(&ctx->uring_lock);
+		return;
+	}
+
+	submitter_task = ctx->submitter_task;
+	/*
+	 * Not suitable for use while IORING_SETUP_R_DISABLED.
+	 * Must use io_ring_register_ctx_lock() in that case.
+	 */
+	WARN_ON_ONCE(!submitter_task);
+	if (likely(current == submitter_task))
+		return;
+
+	/* Use task work to suspend submitter_task */
+	init_task_work(&suspend_work.cb_head, io_ring_suspend_work);
+	init_completion(&suspend_work.suspend_start);
+	suspend_work.lock_state = state;
+	/* If task_work_add() fails, task is exiting, so no need to suspend */
+	if (unlikely(task_work_add(submitter_task, &suspend_work.cb_head,
+				   TWA_SIGNAL))) {
+		state->suspend_end = NULL;
+		return;
+	}
+
+	wait_for_completion(&suspend_work.suspend_start);
 }
 
 /* Attempt to acquire the ctx uring lock without blocking */
 static inline bool io_ring_ctx_trylock(struct io_ring_ctx *ctx)
 {
-	return mutex_trylock(&ctx->uring_lock);
+	if (!(ctx->flags & IORING_SETUP_SINGLE_ISSUER))
+		return mutex_trylock(&ctx->uring_lock);
+
+	/* Not suitable for use while IORING_SETUP_R_DISABLED */
+	WARN_ON_ONCE(!ctx->submitter_task);
+	return current == ctx->submitter_task;
 }
 
 /* Release the ctx uring lock */
 static inline void io_ring_ctx_unlock(struct io_ring_ctx *ctx,
 				      struct io_ring_ctx_lock_state *state)
 {
-	mutex_unlock(&ctx->uring_lock);
+	if (!(ctx->flags & IORING_SETUP_SINGLE_ISSUER)) {
+		mutex_unlock(&ctx->uring_lock);
+		return;
+	}
+
+	if (likely(current == ctx->submitter_task))
+		return;
+
+	if (likely(state->suspend_end))
+		complete(state->suspend_end);
 }
 
 /* Assert (if CONFIG_LOCKDEP) that the ctx uring lock is held */
 static inline void io_ring_ctx_assert_locked(const struct io_ring_ctx *ctx)
 {
+	/*
+	 * No straightforward way to check that submitter_task is suspended
+	 * without access to struct io_ring_ctx_lock_state
+	 */
+	if (ctx->flags & IORING_SETUP_SINGLE_ISSUER)
+		return;
+
 	lockdep_assert_held(&ctx->uring_lock);
 }
 
 /* Acquire the ctx uring lock during the io_uring_register() syscall */
 static inline void io_ring_register_ctx_lock(struct io_ring_ctx *ctx)
-- 
2.45.2
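For illustration only (not part of this patch), a sketch of how a
kernel-side caller running off the submitter task might use the helpers
above. example_touch_ctx() is a hypothetical function, but
io_ring_ctx_lock(), io_ring_ctx_unlock(), and struct
io_ring_ctx_lock_state are the ones introduced here:

/*
 * Hypothetical caller, assuming the helpers declared above. For a
 * single-issuer ring, a non-submitter caller blocks in io_ring_ctx_lock()
 * until the submitter task is parked in io_ring_suspend_work(), and lets
 * it resume again via io_ring_ctx_unlock().
 */
static void example_touch_ctx(struct io_ring_ctx *ctx)
{
	struct io_ring_ctx_lock_state lock_state;

	io_ring_ctx_lock(ctx, &lock_state);
	/* ... touch io_ring_ctx state normally guarded by uring_lock ... */
	io_ring_ctx_unlock(ctx, &lock_state);
}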