From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aarushi Mehta, Stefano Garzarella, surajshirvankar@gmail.com, Hanna Reitz, qemu-block@nongnu.org, Kevin Wolf, Paolo Bonzini, Fam Zheng
Subject: [PATCH 1/3] aio-posix: treat io_uring setup failure as fatal
Date: Tue, 1 Apr 2025 10:27:19 -0400
Message-ID: <20250401142721.280287-2-stefanha@redhat.com>

In the early days of io_uring it was possible for io_uring_setup(2) to
fail due to exhausting RLIMIT_MEMLOCK. QEMU's solution was to fall back
to epoll(7) or ppoll(2) when io_uring could not be used in an
AioContext. Nowadays io_uring memory is accounted differently, so
io_uring_setup(2) no longer fails for this reason. Treat failure as a
fatal error.
Keep it simple: io_uring is available if and only if CONFIG_LINUX_IO_URING
is defined. Upcoming features that rely on io_uring won't need to handle
the case where a subset of AioContexts lacks io_uring. This will simplify
the aio_add_sqe() API introduced in the next commit.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 util/fdmon-io_uring.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
index 2092d08d24..18b33a0370 100644
--- a/util/fdmon-io_uring.c
+++ b/util/fdmon-io_uring.c
@@ -45,6 +45,7 @@
 
 #include "qemu/osdep.h"
 #include <liburing.h>
+#include "qemu/error-report.h"
 #include "qemu/rcu_queue.h"
 #include "aio-posix.h"
 
@@ -369,7 +370,8 @@ bool fdmon_io_uring_setup(AioContext *ctx)
 
     ret = io_uring_queue_init(FDMON_IO_URING_ENTRIES, &ctx->fdmon_io_uring, 0);
     if (ret != 0) {
-        return false;
+        error_report("failed to initialize io_uring: %s", strerror(-ret));
+        exit(EXIT_FAILURE);
     }
 
     QSLIST_INIT(&ctx->submit_list);
-- 
2.49.0
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aarushi Mehta, Stefano Garzarella, surajshirvankar@gmail.com, Hanna Reitz, qemu-block@nongnu.org, Kevin Wolf, Paolo Bonzini, Fam Zheng
Subject: [PATCH 2/3] aio-posix: add aio_add_sqe() API for user-defined io_uring requests
Date: Tue, 1 Apr 2025 10:27:20 -0400
Message-ID: <20250401142721.280287-3-stefanha@redhat.com>

Introduce the aio_add_sqe() API for submitting io_uring requests in the
current AioContext. This allows other components in QEMU, like the block
layer, to take advantage of io_uring features without creating their own
io_uring context.

This API supports nested event loops, just like file descriptor
monitoring and BHs do. This comes at a complexity cost: a BH is required
to dispatch CQE callbacks, and they are placed on a list so that a
nested event loop can invoke its parent's pending CQE callbacks. If
you're wondering why CqeHandler exists instead of just a callback
function pointer, this is why.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/aio.h   |  67 +++++++++++++++++++
 util/aio-posix.h      |   1 +
 util/aio-posix.c      |   9 +++
 util/fdmon-io_uring.c | 145 +++++++++++++++++++++++++++++++-----------
 4 files changed, 185 insertions(+), 37 deletions(-)

diff --git a/include/block/aio.h b/include/block/aio.h
index 1657740a0e..4dfb419a21 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -61,6 +61,27 @@ typedef struct LuringState LuringState;
 /* Is polling disabled? */
 bool aio_poll_disabled(AioContext *ctx);
 
+#ifdef CONFIG_LINUX_IO_URING
+/*
+ * Each io_uring request must have a unique CqeHandler that processes the cqe.
+ * The lifetime of a CqeHandler must be at least from aio_add_sqe() until
+ * ->cb() invocation.
+ */
+typedef struct CqeHandler CqeHandler;
+struct CqeHandler {
+    /* Called by the AioContext when the request has completed */
+    void (*cb)(CqeHandler *handler);
+
+    /* Used internally, do not access this */
+    QSIMPLEQ_ENTRY(CqeHandler) next;
+
+    /* This field is filled in before ->cb() is called */
+    struct io_uring_cqe cqe;
+};
+
+typedef QSIMPLEQ_HEAD(, CqeHandler) CqeHandlerSimpleQ;
+#endif /* CONFIG_LINUX_IO_URING */
+
 /* Callbacks for file descriptor monitoring implementations */
 typedef struct {
     /*
@@ -138,6 +159,27 @@ typedef struct {
      * Called with list_lock incremented.
      */
     void (*gsource_dispatch)(AioContext *ctx, AioHandlerList *ready_list);
+
+#ifdef CONFIG_LINUX_IO_URING
+    /**
+     * aio_add_sqe: Add an io_uring sqe for submission.
+     * @prep_sqe: invoked with an sqe that should be prepared for submission
+     * @opaque: user-defined argument to @prep_sqe()
+     * @cqe_handler: the unique cqe handler associated with this request
+     *
+     * The caller's @prep_sqe() function is invoked to fill in the details of
+     * the sqe. Do not call io_uring_sqe_set_data() on this sqe.
+     *
+     * The kernel may see the sqe as soon as @prep_sqe() returns or it may take
+     * until the next event loop iteration.
+     *
+     * This function is called from the current AioContext and is not
+     * thread-safe.
+     */
+    void (*add_sqe)(AioContext *ctx,
+                    void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
+                    void *opaque, CqeHandler *cqe_handler);
+#endif /* CONFIG_LINUX_IO_URING */
 } FDMonOps;
 
@@ -255,6 +297,10 @@ struct AioContext {
     struct io_uring fdmon_io_uring;
     AioHandlerSList submit_list;
     gpointer io_uring_fd_tag;
+
+    /* Pending callback state for cqe handlers */
+    CqeHandlerSimpleQ cqe_handler_ready_list;
+    QEMUBH *cqe_handler_bh;
 #endif
 
     /* TimerLists for calling timers - one per clock type. Has its own
@@ -370,6 +416,27 @@ QEMUBH *aio_bh_new_full(AioContext *ctx, QEMUBHFunc *cb, void *opaque,
 #define aio_bh_new_guarded(ctx, cb, opaque, guard) \
     aio_bh_new_full((ctx), (cb), (opaque), (stringify(cb)), guard)
 
+#ifdef CONFIG_LINUX_IO_URING
+/**
+ * aio_add_sqe: Add an io_uring sqe for submission.
+ * @prep_sqe: invoked with an sqe that should be prepared for submission
+ * @opaque: user-defined argument to @prep_sqe()
+ * @cqe_handler: the unique cqe handler associated with this request
+ *
+ * The caller's @prep_sqe() function is invoked to fill in the details of the
+ * sqe. Do not call io_uring_sqe_set_data() on this sqe.
+ *
+ * The sqe is submitted by the current AioContext. The kernel may see the sqe
+ * as soon as @prep_sqe() returns or it may take until the next event loop
+ * iteration.
+ *
+ * When the AioContext is destroyed, pending sqes are ignored and their
+ * CqeHandlers are not invoked.
+ */
+void aio_add_sqe(void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
+                 void *opaque, CqeHandler *cqe_handler);
+#endif /* CONFIG_LINUX_IO_URING */
+
 /**
  * aio_notify: Force processing of pending events.
  *
diff --git a/util/aio-posix.h b/util/aio-posix.h
index f9994ed79e..d3e2f66957 100644
--- a/util/aio-posix.h
+++ b/util/aio-posix.h
@@ -35,6 +35,7 @@ struct AioHandler {
 #ifdef CONFIG_LINUX_IO_URING
     QSLIST_ENTRY(AioHandler) node_submitted;
     unsigned flags; /* see fdmon-io_uring.c */
+    CqeHandler cqe_handler;
 #endif
     int64_t poll_idle_timeout; /* when to stop userspace polling */
     bool poll_ready; /* has polling detected an event? */
diff --git a/util/aio-posix.c b/util/aio-posix.c
index 6c2ee0b0b4..f2535dc868 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -767,3 +767,12 @@ void aio_context_set_aio_params(AioContext *ctx, int64_t max_batch)
 
     aio_notify(ctx);
 }
+
+#ifdef CONFIG_LINUX_IO_URING
+void aio_add_sqe(void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
+                 void *opaque, CqeHandler *cqe_handler)
+{
+    AioContext *ctx = qemu_get_current_aio_context();
+    ctx->fdmon_ops->add_sqe(ctx, prep_sqe, opaque, cqe_handler);
+}
+#endif /* CONFIG_LINUX_IO_URING */
diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
index 18b33a0370..a4523e3dcc 100644
--- a/util/fdmon-io_uring.c
+++ b/util/fdmon-io_uring.c
@@ -75,8 +75,8 @@ static inline int pfd_events_from_poll(int poll_events)
 }
 
 /*
- * Returns an sqe for submitting a request. Only be called within
- * fdmon_io_uring_wait().
+ * Returns an sqe for submitting a request. Only called from the AioContext
+ * thread.
  */
 static struct io_uring_sqe *get_sqe(AioContext *ctx)
 {
@@ -166,23 +166,43 @@ static void fdmon_io_uring_update(AioContext *ctx,
     }
 }
 
+static void fdmon_io_uring_add_sqe(AioContext *ctx,
+        void (*prep_sqe)(struct io_uring_sqe *sqe, void *opaque),
+        void *opaque, CqeHandler *cqe_handler)
+{
+    struct io_uring_sqe *sqe = get_sqe(ctx);
+
+    prep_sqe(sqe, opaque);
+    io_uring_sqe_set_data(sqe, cqe_handler);
+}
+
+static void fdmon_special_cqe_handler(CqeHandler *cqe_handler)
+{
+    /*
+     * This is an empty function that is never called. It is used as a function
+     * pointer to distinguish it from ordinary cqe handlers.
+     */
+}
+
 static void add_poll_multishot_sqe(AioContext *ctx, AioHandler *node)
 {
     struct io_uring_sqe *sqe = get_sqe(ctx);
     int events = poll_events_from_pfd(node->pfd.events);
 
     io_uring_prep_poll_multishot(sqe, node->pfd.fd, events);
-    io_uring_sqe_set_data(sqe, node);
+    node->cqe_handler.cb = fdmon_special_cqe_handler;
+    io_uring_sqe_set_data(sqe, &node->cqe_handler);
 }
 
 static void add_poll_remove_sqe(AioContext *ctx, AioHandler *node)
 {
     struct io_uring_sqe *sqe = get_sqe(ctx);
+    CqeHandler *cqe_handler = &node->cqe_handler;
 
 #ifdef LIBURING_HAVE_DATA64
-    io_uring_prep_poll_remove(sqe, (uintptr_t)node);
+    io_uring_prep_poll_remove(sqe, (uintptr_t)cqe_handler);
 #else
-    io_uring_prep_poll_remove(sqe, node);
+    io_uring_prep_poll_remove(sqe, cqe_handler);
 #endif
     io_uring_sqe_set_data(sqe, NULL);
 }
@@ -221,20 +241,12 @@ static void fill_sq_ring(AioContext *ctx)
     }
 }
 
-/* Returns true if a handler became ready */
-static bool process_cqe(AioContext *ctx,
-                        AioHandlerList *ready_list,
-                        struct io_uring_cqe *cqe)
+static bool process_cqe_aio_handler(AioContext *ctx,
+                                    AioHandlerList *ready_list,
+                                    AioHandler *node,
+                                    struct io_uring_cqe *cqe)
 {
-    AioHandler *node = io_uring_cqe_get_data(cqe);
-    unsigned flags;
-
-    /* poll_timeout and poll_remove have a zero user_data field */
-    if (!node) {
-        return false;
-    }
-
-    flags = qatomic_read(&node->flags);
+    unsigned flags = qatomic_read(&node->flags);
 
     /*
      * poll_multishot cancelled by poll_remove? Or completed early because fd
@@ -261,6 +273,56 @@ static bool process_cqe(AioContext *ctx,
     return true;
 }
 
+/* Process CqeHandlers from the ready list */
+static void cqe_handler_bh(void *opaque)
+{
+    AioContext *ctx = opaque;
+    CqeHandlerSimpleQ *ready_list = &ctx->cqe_handler_ready_list;
+
+    /*
+     * If cqe_handler->cb() calls aio_poll() it must continue processing
+     * ready_list. Schedule a BH so the inner event loop calls us again.
+     */
+    qemu_bh_schedule(ctx->cqe_handler_bh);
+
+    while (!QSIMPLEQ_EMPTY(ready_list)) {
+        CqeHandler *cqe_handler = QSIMPLEQ_FIRST(ready_list);
+
+        QSIMPLEQ_REMOVE_HEAD(ready_list, next);
+
+        cqe_handler->cb(cqe_handler);
+    }
+
+    qemu_bh_cancel(ctx->cqe_handler_bh);
+}
+
+/* Returns true if a handler became ready */
+static bool process_cqe(AioContext *ctx,
+                        AioHandlerList *ready_list,
+                        struct io_uring_cqe *cqe)
+{
+    CqeHandler *cqe_handler = io_uring_cqe_get_data(cqe);
+
+    /* poll_timeout and poll_remove have a zero user_data field */
+    if (!cqe_handler) {
+        return false;
+    }
+
+    /*
+     * Special handling for AioHandler cqes. They need ready_list and have a
+     * return value.
+     */
+    if (cqe_handler->cb == fdmon_special_cqe_handler) {
+        AioHandler *node = container_of(cqe_handler, AioHandler, cqe_handler);
+        return process_cqe_aio_handler(ctx, ready_list, node, cqe);
+    }
+
+    cqe_handler->cqe = *cqe;
+    QSIMPLEQ_INSERT_TAIL(&ctx->cqe_handler_ready_list, cqe_handler, next);
+    qemu_bh_schedule(ctx->cqe_handler_bh);
+    return false;
+}
+
 static int process_cq_ring(AioContext *ctx, AioHandlerList *ready_list)
 {
     struct io_uring *ring = &ctx->fdmon_io_uring;
@@ -360,6 +422,7 @@ static const FDMonOps fdmon_io_uring_ops = {
     .gsource_prepare = fdmon_io_uring_gsource_prepare,
     .gsource_check = fdmon_io_uring_gsource_check,
     .gsource_dispatch = fdmon_io_uring_gsource_dispatch,
+    .add_sqe = fdmon_io_uring_add_sqe,
 };
 
 bool fdmon_io_uring_setup(AioContext *ctx)
@@ -375,6 +438,8 @@ bool fdmon_io_uring_setup(AioContext *ctx)
     }
 
     QSLIST_INIT(&ctx->submit_list);
+    QSIMPLEQ_INIT(&ctx->cqe_handler_ready_list);
+    ctx->cqe_handler_bh = aio_bh_new(ctx, cqe_handler_bh, ctx);
     ctx->fdmon_ops = &fdmon_io_uring_ops;
     ctx->io_uring_fd_tag = g_source_add_unix_fd(&ctx->source,
             ctx->fdmon_io_uring.ring_fd, G_IO_IN);
@@ -384,30 +449,36 @@ bool fdmon_io_uring_setup(AioContext *ctx)
 
 void fdmon_io_uring_destroy(AioContext *ctx)
 {
-    if (ctx->fdmon_ops == &fdmon_io_uring_ops) {
-        AioHandler *node;
+    AioHandler *node;
 
-        io_uring_queue_exit(&ctx->fdmon_io_uring);
+    if (ctx->fdmon_ops != &fdmon_io_uring_ops) {
+        return;
+    }
 
-        /* Move handlers due to be removed onto the deleted list */
-        while ((node = QSLIST_FIRST_RCU(&ctx->submit_list))) {
-            unsigned flags = qatomic_fetch_and(&node->flags,
-                                               ~(FDMON_IO_URING_PENDING |
-                                                 FDMON_IO_URING_ADD |
-                                                 FDMON_IO_URING_REMOVE));
+    io_uring_queue_exit(&ctx->fdmon_io_uring);
 
-            if (flags & FDMON_IO_URING_REMOVE) {
-                QLIST_INSERT_HEAD_RCU(&ctx->deleted_aio_handlers, node, node_deleted);
-            }
+    /* Move handlers due to be removed onto the deleted list */
+    while ((node = QSLIST_FIRST_RCU(&ctx->submit_list))) {
+        unsigned flags = qatomic_fetch_and(&node->flags,
+                                           ~(FDMON_IO_URING_PENDING |
+                                             FDMON_IO_URING_ADD |
+                                             FDMON_IO_URING_REMOVE));
 
-            QSLIST_REMOVE_HEAD_RCU(&ctx->submit_list, node_submitted);
+        if (flags & FDMON_IO_URING_REMOVE) {
+            QLIST_INSERT_HEAD_RCU(&ctx->deleted_aio_handlers,
+                                  node, node_deleted);
         }
 
-        g_source_remove_unix_fd(&ctx->source, ctx->io_uring_fd_tag);
-        ctx->io_uring_fd_tag = NULL;
-
-        qemu_lockcnt_lock(&ctx->list_lock);
-        fdmon_poll_downgrade(ctx);
-        qemu_lockcnt_unlock(&ctx->list_lock);
+        QSLIST_REMOVE_HEAD_RCU(&ctx->submit_list, node_submitted);
     }
+
+    g_source_remove_unix_fd(&ctx->source, ctx->io_uring_fd_tag);
+    ctx->io_uring_fd_tag = NULL;
+
+    assert(QSIMPLEQ_EMPTY(&ctx->cqe_handler_ready_list));
+    qemu_bh_delete(ctx->cqe_handler_bh);
+
+    qemu_lockcnt_lock(&ctx->list_lock);
+    fdmon_poll_downgrade(ctx);
+    qemu_lockcnt_unlock(&ctx->list_lock);
 }
-- 
2.49.0
From: Stefan Hajnoczi <stefanha@redhat.com>
To: qemu-devel@nongnu.org
Cc: Aarushi Mehta, Stefano Garzarella, surajshirvankar@gmail.com, Hanna Reitz, qemu-block@nongnu.org, Kevin Wolf, Paolo Bonzini, Fam Zheng
Subject: [PATCH 3/3] block/io_uring: use aio_add_sqe()
Date: Tue, 1 Apr 2025 10:27:21 -0400
Message-ID: <20250401142721.280287-4-stefanha@redhat.com>

AioContext has its own io_uring instance for file descriptor monitoring.
The disk I/O io_uring code was developed separately. Originally I
thought the characteristics of file descriptor monitoring and disk I/O
were too different, requiring separate io_uring instances.

Now it has become clear to me that it's feasible to share a single
io_uring instance for file descriptor monitoring and disk I/O. We're not
using io_uring's IOPOLL feature or anything else that would require a
separate instance.

Unify block/io_uring.c and util/fdmon-io_uring.c using the new
aio_add_sqe() API that allows user-defined io_uring sqe submission. Now
block/io_uring.c just needs to submit readv/writev/fsync, and most of
the io_uring-specific logic is handled by fdmon-io_uring.c.
There are two immediate advantages:

1. Fewer system calls. There is no need to monitor the disk I/O io_uring
   ring fd from the file descriptor monitoring io_uring instance. Disk
   I/O completions are now picked up directly. Also, sqes are
   accumulated in the sq ring until the end of the event loop iteration,
   so there are fewer io_uring_enter(2) syscalls.

2. Less code duplication.

Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 include/block/aio.h     |   7 -
 include/block/raw-aio.h |   5 -
 block/file-posix.c      |  25 +-
 block/io_uring.c        | 489 ++++++++++------------------------------
 stubs/io_uring.c        |  32 ---
 util/async.c            |  35 ---
 util/fdmon-io_uring.c   |   6 +
 block/trace-events      |  12 +-
 stubs/meson.build       |   3 -
 util/trace-events       |   4 +
 10 files changed, 128 insertions(+), 490 deletions(-)
 delete mode 100644 stubs/io_uring.c

diff --git a/include/block/aio.h b/include/block/aio.h
index 4dfb419a21..b390b7bb60 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -291,8 +291,6 @@ struct AioContext {
     struct LinuxAioState *linux_aio;
 #endif
 #ifdef CONFIG_LINUX_IO_URING
-    LuringState *linux_io_uring;
-
     /* State for file descriptor monitoring using Linux io_uring */
     struct io_uring fdmon_io_uring;
     AioHandlerSList submit_list;
@@ -615,11 +613,6 @@ struct LinuxAioState *aio_setup_linux_aio(AioContext *ctx, Error **errp);
 /* Return the LinuxAioState bound to this AioContext */
 struct LinuxAioState *aio_get_linux_aio(AioContext *ctx);
 
-/* Setup the LuringState bound to this AioContext */
-LuringState *aio_setup_linux_io_uring(AioContext *ctx, Error **errp);
-
-/* Return the LuringState bound to this AioContext */
-LuringState *aio_get_linux_io_uring(AioContext *ctx);
 /**
  * aio_timer_new_with_attrs:
  * @ctx: the aio context
diff --git a/include/block/raw-aio.h b/include/block/raw-aio.h
index 6570244496..30e5fc9a9f 100644
--- a/include/block/raw-aio.h
+++ b/include/block/raw-aio.h
@@ -74,15 +74,10 @@ static inline bool laio_has_fua(void)
 #endif
 
 /* io_uring.c - Linux io_uring implementation */
 #ifdef CONFIG_LINUX_IO_URING
-LuringState *luring_init(Error **errp);
-void luring_cleanup(LuringState *s);
-
 /* luring_co_submit: submit I/O requests in the thread's current AioContext. */
 int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
                                   QEMUIOVector *qiov, int type,
                                   BdrvRequestFlags flags);
-void luring_detach_aio_context(LuringState *s, AioContext *old_context);
-void luring_attach_aio_context(LuringState *s, AioContext *new_context);
 bool luring_has_fua(void);
 #else
 static inline bool luring_has_fua(void)
diff --git a/block/file-posix.c b/block/file-posix.c
index 56d1972d15..b1b1d7a5dc 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2442,27 +2442,6 @@ static bool bdrv_qiov_is_aligned(BlockDriverState *bs, QEMUIOVector *qiov)
     return true;
 }
 
-#ifdef CONFIG_LINUX_IO_URING
-static inline bool raw_check_linux_io_uring(BDRVRawState *s)
-{
-    Error *local_err = NULL;
-    AioContext *ctx;
-
-    if (!s->use_linux_io_uring) {
-        return false;
-    }
-
-    ctx = qemu_get_current_aio_context();
-    if (unlikely(!aio_setup_linux_io_uring(ctx, &local_err))) {
-        error_reportf_err(local_err, "Unable to use linux io_uring, "
-                          "falling back to thread pool: ");
-        s->use_linux_io_uring = false;
-        return false;
-    }
-    return true;
-}
-#endif
-
 #ifdef CONFIG_LINUX_AIO
 static inline bool raw_check_linux_aio(BDRVRawState *s)
 {
@@ -2515,7 +2494,7 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, int64_t *offset_ptr,
     if (s->needs_alignment && !bdrv_qiov_is_aligned(bs, qiov)) {
         type |= QEMU_AIO_MISALIGNED;
 #ifdef CONFIG_LINUX_IO_URING
-    } else if (raw_check_linux_io_uring(s)) {
+    } else if (s->use_linux_io_uring) {
         assert(qiov->size == bytes);
         ret = luring_co_submit(bs, s->fd, offset, qiov, type, flags);
         goto out;
@@ -2612,7 +2591,7 @@ static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
     };
 
 #ifdef CONFIG_LINUX_IO_URING
-    if (raw_check_linux_io_uring(s)) {
+    if (s->use_linux_io_uring) {
         return luring_co_submit(bs, s->fd, 0, NULL, QEMU_AIO_FLUSH, 0);
     }
 #endif
diff --git a/block/io_uring.c b/block/io_uring.c
index dd4f304910..dd930ee57e 100644
--- a/block/io_uring.c
+++ b/block/io_uring.c
@@ -11,28 +11,20 @@
 #include "qemu/osdep.h"
 #include <liburing.h>
 #include "block/aio.h"
-#include "qemu/queue.h"
 #include "block/block.h"
 #include "block/raw-aio.h"
 #include "qemu/coroutine.h"
-#include "qemu/defer-call.h"
-#include "qapi/error.h"
 #include "system/block-backend.h"
 #include "trace.h"
 
-/* Only used for assertions. */
-#include "qemu/coroutine_int.h"
-
-/* io_uring ring size */
-#define MAX_ENTRIES 128
-
-typedef struct LuringAIOCB {
+typedef struct {
     Coroutine *co;
-    struct io_uring_sqe sqeq;
-    ssize_t ret;
     QEMUIOVector *qiov;
-    bool is_read;
-    QSIMPLEQ_ENTRY(LuringAIOCB) next;
+    uint64_t offset;
+    ssize_t ret;
+    int type;
+    int fd;
+    BdrvRequestFlags flags;
 
     /*
      * Buffered reads may require resubmission, see
@@ -40,36 +32,51 @@ typedef struct LuringAIOCB {
      */
     int total_read;
     QEMUIOVector resubmit_qiov;
-} LuringAIOCB;
 
-typedef struct LuringQueue {
-    unsigned int in_queue;
-    unsigned int in_flight;
-    bool blocked;
-    QSIMPLEQ_HEAD(, LuringAIOCB) submit_queue;
-} LuringQueue;
+    CqeHandler cqe_handler;
+} LuringRequest;
 
-struct LuringState {
-    AioContext *aio_context;
-
-    struct io_uring ring;
-
-    /* No locking required, only accessed from AioContext home thread */
-    LuringQueue io_q;
-
-    QEMUBH *completion_bh;
-};
-
-/**
- * luring_resubmit:
- *
- * Resubmit a request by appending it to submit_queue. The caller must ensure
- * that ioq_submit() is called later so that submit_queue requests are started.
- */
-static void luring_resubmit(LuringState *s, LuringAIOCB *luringcb)
+static void luring_prep_sqe(struct io_uring_sqe *sqe, void *opaque)
 {
-    QSIMPLEQ_INSERT_TAIL(&s->io_q.submit_queue, luringcb, next);
-    s->io_q.in_queue++;
+    LuringRequest *req = opaque;
+    QEMUIOVector *qiov = req->qiov;
+    uint64_t offset = req->offset;
+    int fd = req->fd;
+    BdrvRequestFlags flags = req->flags;
+
+    switch (req->type) {
+    case QEMU_AIO_WRITE:
+#ifdef HAVE_IO_URING_PREP_WRITEV2
+    {
+        int luring_flags = (flags & BDRV_REQ_FUA) ? RWF_DSYNC : 0;
+        io_uring_prep_writev2(sqe, fd, qiov->iov,
+                              qiov->niov, offset, luring_flags);
+    }
+#else
+        assert(flags == 0);
+        io_uring_prep_writev(sqe, fd, qiov->iov, qiov->niov, offset);
+#endif
+        break;
+    case QEMU_AIO_ZONE_APPEND:
+        io_uring_prep_writev(sqe, fd, qiov->iov, qiov->niov, offset);
+        break;
+    case QEMU_AIO_READ:
+    {
+        if (req->resubmit_qiov.iov != NULL) {
+            qiov = &req->resubmit_qiov;
+        }
+        io_uring_prep_readv(sqe, fd, qiov->iov, qiov->niov,
+                            offset + req->total_read);
+        break;
+    }
+    case QEMU_AIO_FLUSH:
+        io_uring_prep_fsync(sqe, fd, IORING_FSYNC_DATASYNC);
+        break;
+    default:
+        fprintf(stderr, "%s: invalid AIO request type, aborting 0x%x.\n",
+                __func__, req->type);
+        abort();
+    }
 }
 
 /**
@@ -78,385 +85,115 @@ static void luring_resubmit(LuringState *s, LuringAIOCB *luringcb)
  * Short reads are rare but may occur. The remaining read request needs to be
  * resubmitted.
  */
-static void luring_resubmit_short_read(LuringState *s, LuringAIOCB *luringcb,
-                                       int nread)
+static void luring_resubmit_short_read(LuringRequest *req, int nread)
 {
     QEMUIOVector *resubmit_qiov;
     size_t remaining;
 
-    trace_luring_resubmit_short_read(s, luringcb, nread);
+    trace_luring_resubmit_short_read(req, nread);
 
     /* Update read position */
-    luringcb->total_read += nread;
-    remaining = luringcb->qiov->size - luringcb->total_read;
+    req->total_read += nread;
+    remaining = req->qiov->size - req->total_read;
 
     /* Shorten qiov */
-    resubmit_qiov = &luringcb->resubmit_qiov;
+    resubmit_qiov = &req->resubmit_qiov;
     if (resubmit_qiov->iov == NULL) {
-        qemu_iovec_init(resubmit_qiov, luringcb->qiov->niov);
+        qemu_iovec_init(resubmit_qiov, req->qiov->niov);
     } else {
         qemu_iovec_reset(resubmit_qiov);
     }
-    qemu_iovec_concat(resubmit_qiov, luringcb->qiov, luringcb->total_read,
-                      remaining);
+    qemu_iovec_concat(resubmit_qiov, req->qiov, req->total_read, remaining);
 
-    /* Update sqe */
-    luringcb->sqeq.off += nread;
-    luringcb->sqeq.addr = (uintptr_t)luringcb->resubmit_qiov.iov;
-    luringcb->sqeq.len = luringcb->resubmit_qiov.niov;
-
-    luring_resubmit(s, luringcb);
+    aio_add_sqe(luring_prep_sqe, req, &req->cqe_handler);
 }
 
-/**
- * luring_process_completions:
- * @s: AIO state
- *
- * Fetches completed I/O requests, consumes cqes and invokes their callbacks
- * The function is somewhat tricky because it supports nested event loops, for
- * example when a request callback invokes aio_poll().
- *
- * Function schedules BH completion so it can be called again in a nested
- * event loop. When there are no events left to complete the BH is being
- * canceled.
- *
- */
-static void luring_process_completions(LuringState *s)
+static void luring_cqe_handler(CqeHandler *cqe_handler)
 {
-    struct io_uring_cqe *cqes;
-    int total_bytes;
+    LuringRequest *req = container_of(cqe_handler, LuringRequest, cqe_handler);
+    int ret = cqe_handler->cqe.res;
 
-    defer_call_begin();
+    trace_luring_cqe_handler(req, ret);
 
-    /*
-     * Request completion callbacks can run the nested event loop.
-     * Schedule ourselves so the nested event loop will "see" remaining
-     * completed requests and process them. Without this, completion
-     * callbacks that wait for other requests using a nested event loop
-     * would hang forever.
-     *
-     * This workaround is needed because io_uring uses poll_wait, which
-     * is woken up when new events are added to the uring, thus polling on
-     * the same uring fd will block unless more events are received.
-     *
-     * Other leaf block drivers (drivers that access the data themselves)
-     * are networking based, so they poll sockets for data and run the
-     * correct coroutine.
-     */
-    qemu_bh_schedule(s->completion_bh);
-
-    while (io_uring_peek_cqe(&s->ring, &cqes) == 0) {
-        LuringAIOCB *luringcb;
-        int ret;
-
-        if (!cqes) {
-            break;
+    if (ret < 0) {
+        /*
+         * Only writev/readv/fsync requests on regular files or host block
+         * devices are submitted. Therefore -EAGAIN is not expected but it's
+         * known to happen sometimes with Linux SCSI. Submit again and hope
+         * the request completes successfully.
+         *
+         * For more information, see:
+         * https://lore.kernel.org/io-uring/20210727165811.284510-3-axboe@kernel.dk/T/#u
+         *
+         * If the code is changed to submit other types of requests in the
+         * future, then this workaround may need to be extended to deal with
+         * genuine -EAGAIN results that should not be resubmitted
+         * immediately.
+         */
+        if (ret == -EINTR || ret == -EAGAIN) {
+            aio_add_sqe(luring_prep_sqe, req, &req->cqe_handler);
+            return;
         }
-
-        luringcb = io_uring_cqe_get_data(cqes);
-        ret = cqes->res;
-        io_uring_cqe_seen(&s->ring, cqes);
-        cqes = NULL;
-
-        /* Change counters one-by-one because we can be nested. */
-        s->io_q.in_flight--;
-        trace_luring_process_completion(s, luringcb, ret);
-
+    } else if (req->qiov) {
         /* total_read is non-zero only for resubmitted read requests */
-        total_bytes = ret + luringcb->total_read;
+        int total_bytes = ret + req->total_read;
 
-        if (ret < 0) {
-            /*
-             * Only writev/readv/fsync requests on regular files or host block
-             * devices are submitted. Therefore -EAGAIN is not expected but it's
-             * known to happen sometimes with Linux SCSI. Submit again and hope
-             * the request completes successfully.
-             *
-             * For more information, see:
-             * https://lore.kernel.org/io-uring/20210727165811.284510-3-axboe@kernel.dk/T/#u
-             *
-             * If the code is changed to submit other types of requests in the
-             * future, then this workaround may need to be extended to deal with
-             * genuine -EAGAIN results that should not be resubmitted
-             * immediately.
-             */
-            if (ret == -EINTR || ret == -EAGAIN) {
-                luring_resubmit(s, luringcb);
-                continue;
-            }
-        } else if (!luringcb->qiov) {
-            goto end;
-        } else if (total_bytes == luringcb->qiov->size) {
+        if (total_bytes == req->qiov->size) {
             ret = 0;
-            /* Only read/write */
         } else {
             /* Short Read/Write */
-            if (luringcb->is_read) {
+            if (req->type == QEMU_AIO_READ) {
                 if (ret > 0) {
-                    luring_resubmit_short_read(s, luringcb, ret);
-                    continue;
-                } else {
-                    /* Pad with zeroes */
-                    qemu_iovec_memset(luringcb->qiov, total_bytes, 0,
-                                      luringcb->qiov->size - total_bytes);
-                    ret = 0;
+                    luring_resubmit_short_read(req, ret);
+                    return;
                 }
+
+                /* Pad with zeroes */
+                qemu_iovec_memset(req->qiov, total_bytes, 0,
+                                  req->qiov->size - total_bytes);
+                ret = 0;
             } else {
                 ret = -ENOSPC;
             }
         }
-end:
-        luringcb->ret = ret;
-        qemu_iovec_destroy(&luringcb->resubmit_qiov);
-
-        /*
-         * If the coroutine is already entered it must be in ioq_submit()
-         * and will notice luringcb->ret has been filled in when it
-         * eventually runs later. Coroutines cannot be entered recursively
-         * so avoid doing that!
-         */
-        assert(luringcb->co->ctx == s->aio_context);
-        if (!qemu_coroutine_entered(luringcb->co)) {
-            aio_co_wake(luringcb->co);
-        }
     }
 
-    qemu_bh_cancel(s->completion_bh);
+    req->ret = ret;
+    qemu_iovec_destroy(&req->resubmit_qiov);
 
-    defer_call_end();
-}
-
-static int ioq_submit(LuringState *s)
-{
-    int ret = 0;
-    LuringAIOCB *luringcb, *luringcb_next;
-
-    while (s->io_q.in_queue > 0) {
-        /*
-         * Try to fetch sqes from the ring for requests waiting in
-         * the overflow queue
-         */
-        QSIMPLEQ_FOREACH_SAFE(luringcb, &s->io_q.submit_queue, next,
-                              luringcb_next) {
-            struct io_uring_sqe *sqes = io_uring_get_sqe(&s->ring);
-            if (!sqes) {
-                break;
-            }
-            /* Prep sqe for submission */
-            *sqes = luringcb->sqeq;
-            QSIMPLEQ_REMOVE_HEAD(&s->io_q.submit_queue, next);
-        }
-        ret = io_uring_submit(&s->ring);
-        trace_luring_io_uring_submit(s, ret);
-        /* Prevent infinite loop if submission is refused */
-        if (ret <= 0) {
-            if (ret == -EAGAIN || ret == -EINTR) {
-                continue;
-            }
-            break;
-        }
-        s->io_q.in_flight += ret;
-        s->io_q.in_queue -= ret;
-    }
-    s->io_q.blocked = (s->io_q.in_queue > 0);
-
-    if (s->io_q.in_flight) {
-        /*
-         * We can try to complete something just right away if there are
-         * still requests in-flight.
-         */
-        luring_process_completions(s);
-    }
-    return ret;
-}
-
-static void luring_process_completions_and_submit(LuringState *s)
-{
-    luring_process_completions(s);
-
-    if (s->io_q.in_queue > 0) {
-        ioq_submit(s);
+    /*
+     * If the coroutine is already entered it must be in luring_co_submit() and
+     * will notice req->ret has been filled in when it eventually runs later.
+     * Coroutines cannot be entered recursively so avoid doing that!
+     */
+    if (!qemu_coroutine_entered(req->co)) {
+        aio_co_wake(req->co);
     }
 }
 
-static void qemu_luring_completion_bh(void *opaque)
+int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd,
+                                  uint64_t offset, QEMUIOVector *qiov,
+                                  int type, BdrvRequestFlags flags)
 {
-    LuringState *s = opaque;
-    luring_process_completions_and_submit(s);
-}
-
-static void qemu_luring_completion_cb(void *opaque)
-{
-    LuringState *s = opaque;
-    luring_process_completions_and_submit(s);
-}
-
-static bool qemu_luring_poll_cb(void *opaque)
-{
-    LuringState *s = opaque;
-
-    return io_uring_cq_ready(&s->ring);
-}
-
-static void qemu_luring_poll_ready(void *opaque)
-{
-    LuringState *s = opaque;
-
-    luring_process_completions_and_submit(s);
-}
-
-static void ioq_init(LuringQueue *io_q)
-{
-    QSIMPLEQ_INIT(&io_q->submit_queue);
-    io_q->in_queue = 0;
-    io_q->in_flight = 0;
-    io_q->blocked = false;
-}
-
-static void luring_deferred_fn(void *opaque)
-{
-    LuringState *s = opaque;
-    trace_luring_unplug_fn(s, s->io_q.blocked, s->io_q.in_queue,
-                           s->io_q.in_flight);
-    if (!s->io_q.blocked && s->io_q.in_queue > 0) {
-        ioq_submit(s);
-    }
-}
-
-/**
- * luring_do_submit:
- * @fd: file descriptor for I/O
- * @luringcb: AIO control block
- * @s: AIO state
- * @offset: offset for request
- * @type: type of request
- *
- * Fetches sqes from ring, adds to pending queue and preps them
- *
- */
-static int luring_do_submit(int fd, LuringAIOCB *luringcb, LuringState *s,
-                            uint64_t offset, int type, BdrvRequestFlags flags)
-{
-    int ret;
-    struct io_uring_sqe *sqes = &luringcb->sqeq;
-
-    switch (type) {
-    case QEMU_AIO_WRITE:
-#ifdef HAVE_IO_URING_PREP_WRITEV2
-    {
-        int luring_flags = (flags & BDRV_REQ_FUA) ? RWF_DSYNC : 0;
-        io_uring_prep_writev2(sqes, fd, luringcb->qiov->iov,
-                              luringcb->qiov->niov, offset, luring_flags);
-    }
-#else
-        assert(flags == 0);
-        io_uring_prep_writev(sqes, fd, luringcb->qiov->iov,
-                             luringcb->qiov->niov, offset);
-#endif
-        break;
-    case QEMU_AIO_ZONE_APPEND:
-        io_uring_prep_writev(sqes, fd, luringcb->qiov->iov,
-                             luringcb->qiov->niov, offset);
-        break;
-    case QEMU_AIO_READ:
-        io_uring_prep_readv(sqes, fd, luringcb->qiov->iov,
-                            luringcb->qiov->niov, offset);
-        break;
-    case QEMU_AIO_FLUSH:
-        io_uring_prep_fsync(sqes, fd, IORING_FSYNC_DATASYNC);
-        break;
-    default:
-        fprintf(stderr, "%s: invalid AIO request type, aborting 0x%x.\n",
-                __func__, type);
-        abort();
-    }
-    io_uring_sqe_set_data(sqes, luringcb);
-
-    QSIMPLEQ_INSERT_TAIL(&s->io_q.submit_queue, luringcb, next);
-    s->io_q.in_queue++;
-    trace_luring_do_submit(s, s->io_q.blocked, s->io_q.in_queue,
-                           s->io_q.in_flight);
-    if (!s->io_q.blocked) {
-        if (s->io_q.in_flight + s->io_q.in_queue >= MAX_ENTRIES) {
-            ret = ioq_submit(s);
-            trace_luring_do_submit_done(s, ret);
-            return ret;
-        }
-
-        defer_call(luring_deferred_fn, s);
-    }
-    return 0;
-}
-
-int coroutine_fn luring_co_submit(BlockDriverState *bs, int fd, uint64_t offset,
-                                  QEMUIOVector *qiov, int type,
-                                  BdrvRequestFlags flags)
-{
-    int ret;
-    AioContext *ctx = qemu_get_current_aio_context();
-    LuringState *s = aio_get_linux_io_uring(ctx);
-    LuringAIOCB luringcb = {
+    LuringRequest req = {
         .co = qemu_coroutine_self(),
-        .ret = -EINPROGRESS,
         .qiov = qiov,
-        .is_read = (type == QEMU_AIO_READ),
+        .ret = -EINPROGRESS,
+        .type = type,
+        .fd = fd,
+        .offset = offset,
+        .flags = flags,
     };
-    trace_luring_co_submit(bs, s, &luringcb, fd, offset, qiov ? qiov->size : 0,
-                           type);
-    ret = luring_do_submit(fd, &luringcb, s, offset, type, flags);
 
-    if (ret < 0) {
-        return ret;
-    }
+    req.cqe_handler.cb = luring_cqe_handler;
 
-    if (luringcb.ret == -EINPROGRESS) {
+    trace_luring_co_submit(bs, &req, fd, offset, qiov ? qiov->size : 0, type);
+    aio_add_sqe(luring_prep_sqe, &req, &req.cqe_handler);
+
+    if (req.ret == -EINPROGRESS) {
         qemu_coroutine_yield();
     }
-    return luringcb.ret;
-}
-
-void luring_detach_aio_context(LuringState *s, AioContext *old_context)
-{
-    aio_set_fd_handler(old_context, s->ring.ring_fd,
-                       NULL, NULL, NULL, NULL, s);
-    qemu_bh_delete(s->completion_bh);
-    s->aio_context = NULL;
-}
-
-void luring_attach_aio_context(LuringState *s, AioContext *new_context)
-{
-    s->aio_context = new_context;
-    s->completion_bh = aio_bh_new(new_context, qemu_luring_completion_bh, s);
-    aio_set_fd_handler(s->aio_context, s->ring.ring_fd,
-                       qemu_luring_completion_cb, NULL,
-                       qemu_luring_poll_cb, qemu_luring_poll_ready, s);
-}
-
-LuringState *luring_init(Error **errp)
-{
-    int rc;
-    LuringState *s = g_new0(LuringState, 1);
-    struct io_uring *ring = &s->ring;
-
-    trace_luring_init_state(s, sizeof(*s));
-
-    rc = io_uring_queue_init(MAX_ENTRIES, ring, 0);
-    if (rc < 0) {
-        error_setg_errno(errp, -rc, "failed to init linux io_uring ring");
-        g_free(s);
-        return NULL;
-    }
-
-    ioq_init(&s->io_q);
-    return s;
-
-}
-
-void luring_cleanup(LuringState *s)
-{
-    io_uring_queue_exit(&s->ring);
-    trace_luring_cleanup_state(s);
-    g_free(s);
+    return req.ret;
 }
 
 bool luring_has_fua(void)
diff --git a/stubs/io_uring.c b/stubs/io_uring.c
deleted file mode 100644
index 622d1e4648..0000000000
--- a/stubs/io_uring.c
+++ /dev/null
@@ -1,32 +0,0 @@
-/*
- * Linux io_uring support.
- *
- * Copyright (C) 2009 IBM, Corp.
- * Copyright (C) 2009 Red Hat, Inc.
- *
- * This work is licensed under the terms of the GNU GPL, version 2 or later.
- * See the COPYING file in the top-level directory.
- */
-#include "qemu/osdep.h"
-#include "block/aio.h"
-#include "block/raw-aio.h"
-
-void luring_detach_aio_context(LuringState *s, AioContext *old_context)
-{
-    abort();
-}
-
-void luring_attach_aio_context(LuringState *s, AioContext *new_context)
-{
-    abort();
-}
-
-LuringState *luring_init(Error **errp)
-{
-    abort();
-}
-
-void luring_cleanup(LuringState *s)
-{
-    abort();
-}
diff --git a/util/async.c b/util/async.c
index 11954f8931..4f8465978f 100644
--- a/util/async.c
+++ b/util/async.c
@@ -379,14 +379,6 @@ aio_ctx_finalize(GSource *source)
     }
 #endif
 
-#ifdef CONFIG_LINUX_IO_URING
-    if (ctx->linux_io_uring) {
-        luring_detach_aio_context(ctx->linux_io_uring, ctx);
-        luring_cleanup(ctx->linux_io_uring);
-        ctx->linux_io_uring = NULL;
-    }
-#endif
-
     assert(QSLIST_EMPTY(&ctx->scheduled_coroutines));
     qemu_bh_delete(ctx->co_schedule_bh);
 
@@ -461,29 +453,6 @@ LinuxAioState *aio_get_linux_aio(AioContext *ctx)
 }
 #endif
 
-#ifdef CONFIG_LINUX_IO_URING
-LuringState *aio_setup_linux_io_uring(AioContext *ctx, Error **errp)
-{
-    if (ctx->linux_io_uring) {
-        return ctx->linux_io_uring;
-    }
-
-    ctx->linux_io_uring = luring_init(errp);
-    if (!ctx->linux_io_uring) {
-        return NULL;
-    }
-
-    luring_attach_aio_context(ctx->linux_io_uring, ctx);
-    return ctx->linux_io_uring;
-}
-
-LuringState *aio_get_linux_io_uring(AioContext *ctx)
-{
-    assert(ctx->linux_io_uring);
-    return ctx->linux_io_uring;
-}
-#endif
-
 void aio_notify(AioContext *ctx)
 {
     /*
@@ -600,10 +569,6 @@ AioContext *aio_context_new(Error **errp)
     ctx->linux_aio = NULL;
 #endif
 
-#ifdef CONFIG_LINUX_IO_URING
-    ctx->linux_io_uring = NULL;
-#endif
-
     ctx->thread_pool = NULL;
     qemu_rec_mutex_init(&ctx->lock);
     timerlistgroup_init(&ctx->tlg, aio_timerlist_notify, ctx);
diff --git a/util/fdmon-io_uring.c b/util/fdmon-io_uring.c
index a4523e3dcc..a880243ec9 100644
--- a/util/fdmon-io_uring.c
+++ b/util/fdmon-io_uring.c
@@ -48,6 +48,7 @@
 #include "qemu/error-report.h"
 #include "qemu/rcu_queue.h"
 #include "aio-posix.h"
+#include "trace.h"
 
 enum {
     FDMON_IO_URING_ENTRIES = 128, /* sq/cq ring size */
@@ -174,6 +175,9 @@ static void fdmon_io_uring_add_sqe(AioContext *ctx,
 
     prep_sqe(sqe, opaque);
     io_uring_sqe_set_data(sqe, cqe_handler);
+
+    trace_fdmon_io_uring_add_sqe(ctx, opaque, sqe->opcode, sqe->fd, sqe->off,
+                                 cqe_handler);
 }
 
 static void fdmon_special_cqe_handler(CqeHandler *cqe_handler)
@@ -290,6 +294,8 @@ static void cqe_handler_bh(void *opaque)
 
         QSIMPLEQ_REMOVE_HEAD(ready_list, next);
 
+        trace_fdmon_io_uring_cqe_handler(ctx, cqe_handler,
+                                         cqe_handler->cqe.res);
         cqe_handler->cb(cqe_handler);
     }
 
diff --git a/block/trace-events b/block/trace-events
index 8e789e1f12..c9b4736ff8 100644
--- a/block/trace-events
+++ b/block/trace-events
@@ -62,15 +62,9 @@ qmp_block_stream(void *bs) "bs %p"
 file_paio_submit(void *acb, void *opaque, int64_t offset, int count, int type) "acb %p opaque %p offset %"PRId64" count %d type %d"
 
 # io_uring.c
-luring_init_state(void *s, size_t size) "s %p size %zu"
-luring_cleanup_state(void *s) "%p freed"
-luring_unplug_fn(void *s, int blocked, int queued, int inflight) "LuringState %p blocked %d queued %d inflight %d"
-luring_do_submit(void *s, int blocked, int queued, int inflight) "LuringState %p blocked %d queued %d inflight %d"
-luring_do_submit_done(void *s, int ret) "LuringState %p submitted to kernel %d"
-luring_co_submit(void *bs, void *s, void *luringcb, int fd, uint64_t offset, size_t nbytes, int type) "bs %p s %p luringcb %p fd %d offset %" PRId64 " nbytes %zd type %d"
-luring_process_completion(void *s, void *aiocb, int ret) "LuringState %p luringcb %p ret %d"
-luring_io_uring_submit(void *s, int ret) "LuringState %p ret %d"
-luring_resubmit_short_read(void *s, void *luringcb, int nread) "LuringState %p luringcb %p nread %d"
+luring_cqe_handler(void *req, int ret) "req %p ret %d"
+luring_co_submit(void *bs, void *req, int fd, uint64_t offset, size_t nbytes, int type) "bs %p req %p fd %d offset %" PRId64 " nbytes %zd type %d"
+luring_resubmit_short_read(void *req, int nread) "req %p nread %d"
 
 # qcow2.c
 qcow2_add_task(void *co, void *bs, void *pool, const char *action, int cluster_type, uint64_t host_offset, uint64_t offset, uint64_t bytes, void *qiov, size_t qiov_offset) "co %p bs %p pool %p: %s: cluster_type %d file_cluster_offset %" PRIu64 " offset %" PRIu64 " bytes %" PRIu64 " qiov %p qiov_offset %zu"
diff --git a/stubs/meson.build b/stubs/meson.build
index 63392f5e78..d157b06273 100644
--- a/stubs/meson.build
+++ b/stubs/meson.build
@@ -32,9 +32,6 @@ if have_block or have_ga
   stub_ss.add(files('cpus-virtual-clock.c'))
   stub_ss.add(files('icount.c'))
   stub_ss.add(files('graph-lock.c'))
-  if linux_io_uring.found()
-    stub_ss.add(files('io_uring.c'))
-  endif
   if libaio.found()
     stub_ss.add(files('linux-aio.c'))
   endif
diff --git a/util/trace-events b/util/trace-events
index bd8f25fb59..540d662507 100644
--- a/util/trace-events
+++ b/util/trace-events
@@ -24,6 +24,10 @@ buffer_move_empty(const char *buf, size_t len, const char *from) "%s: %zd bytes
 buffer_move(const char *buf, size_t len, const char *from) "%s: %zd bytes from %s"
 buffer_free(const char *buf, size_t len) "%s: capacity %zd"
 
+# fdmon-io_uring.c
+fdmon_io_uring_add_sqe(void *ctx, void *opaque, int opcode, int fd, uint64_t off, void *cqe_handler) "ctx %p opaque %p opcode %d fd %d off %"PRId64" cqe_handler %p"
+fdmon_io_uring_cqe_handler(void *ctx, void *cqe_handler, int cqe_res) "ctx %p cqe_handler %p cqe_res %d"
+
 # filemonitor-inotify.c
 qemu_file_monitor_add_watch(void *mon, const char *dirpath, const char *filename, void *cb, void *opaque, int64_t id) "File monitor %p add watch dir='%s' file='%s' cb=%p opaque=%p id=%" PRId64
 qemu_file_monitor_remove_watch(void *mon, const char *dirpath, int64_t id) "File monitor %p remove watch dir='%s' id=%" PRId64
-- 
2.49.0