From: remy.noel@blade-group.com
To: qemu-devel@nongnu.org
Date: Fri, 16 Nov 2018 20:02:10 +0100
Message-Id: <20181116190211.17622-4-remy.noel@blade-group.com>
In-Reply-To: <20181116190211.17622-1-remy.noel@blade-group.com>
References: <20181116190211.17622-1-remy.noel@blade-group.com>
Subject: [Qemu-devel] aio: Do not use list_lock as a sync mechanism for aio_handlers anymore.
Cc: Kevin Wolf, Fam Zheng, "open list:Block I/O path", Stefan Weil,
    Max Reitz, Remy Noel, Stefan Hajnoczi

From: Remy Noel

It is still used for bottom halves, though, and to avoid concurrent
set_fd_handler calls (we could probably decouple the two, but
set_fd_handler calls are quite rare, so it probably isn't worth it).

Signed-off-by: Remy Noel
---
 include/block/aio.h |  4 +++-
 util/aio-posix.c    | 20 --------------------
 util/aio-win32.c    |  9 ---------
 util/async.c        |  7 +++++--
 4 files changed, 8 insertions(+), 32 deletions(-)

diff --git a/include/block/aio.h b/include/block/aio.h
index 0ca25dfec6..99c17a22f7 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -57,7 +57,9 @@ struct AioContext {
     /* Used by AioContext users to protect from multi-threaded access.  */
     QemuRecMutex lock;
 
-    /* The list of registered AIO handlers.  Protected by ctx->list_lock. */
+    /* The list of registered AIO handlers.
+     * RCU-enabled, writes protected by ctx->list_lock.
+     */
     QLIST_HEAD(, AioHandler) aio_handlers;
 
     /* Used to avoid unnecessary event_notifier_set calls in aio_notify;
diff --git a/util/aio-posix.c b/util/aio-posix.c
index 83db3f65f4..46b7c571cc 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -341,7 +341,6 @@ static void poll_set_started(AioContext *ctx, bool started)
 
     ctx->poll_started = started;
 
-    qemu_lockcnt_inc(&ctx->list_lock);
     rcu_read_lock();
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
         IOHandler *fn;
@@ -357,7 +356,6 @@ static void poll_set_started(AioContext *ctx, bool started)
         }
     }
     rcu_read_unlock();
-    qemu_lockcnt_dec(&ctx->list_lock);
 }
 
 
@@ -374,12 +372,6 @@ bool aio_pending(AioContext *ctx)
     AioHandler *node;
     bool result = false;
 
-    /*
-     * We have to walk very carefully in case aio_set_fd_handler is
-     * called while we're walking.
-     */
-    qemu_lockcnt_inc(&ctx->list_lock);
-
     rcu_read_lock();
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
         int revents;
@@ -397,7 +389,6 @@ bool aio_pending(AioContext *ctx)
         }
     }
     rcu_read_unlock();
-    qemu_lockcnt_dec(&ctx->list_lock);
 
     return result;
 }
@@ -438,10 +429,8 @@ static bool aio_dispatch_handlers(AioContext *ctx)
 
 void aio_dispatch(AioContext *ctx)
 {
-    qemu_lockcnt_inc(&ctx->list_lock);
     aio_bh_poll(ctx);
     aio_dispatch_handlers(ctx);
-    qemu_lockcnt_dec(&ctx->list_lock);
 
     timerlistgroup_run_timers(&ctx->tlg);
 }
@@ -524,8 +513,6 @@ static bool run_poll_handlers_once(AioContext *ctx, int64_t *timeout)
  * Note that ctx->notify_me must be non-zero so this function can detect
  * aio_notify().
  *
- * Note that the caller must have incremented ctx->list_lock.
- *
  * Returns: true if progress was made, false otherwise
  */
 static bool run_poll_handlers(AioContext *ctx, int64_t max_ns, int64_t *timeout)
@@ -534,7 +521,6 @@ static bool run_poll_handlers(AioContext *ctx, int64_t max_ns, int64_t *timeout)
     int64_t start_time, elapsed_time;
 
     assert(ctx->notify_me);
-    assert(qemu_lockcnt_count(&ctx->list_lock) > 0);
 
     trace_run_poll_handlers_begin(ctx, max_ns, *timeout);
 
@@ -563,8 +549,6 @@ static bool run_poll_handlers(AioContext *ctx, int64_t max_ns, int64_t *timeout)
  *
  * ctx->notify_me must be non-zero so this function can detect aio_notify().
  *
- * Note that the caller must have incremented ctx->list_lock.
- *
  * Returns: true if progress was made, false otherwise
  */
 static bool try_poll_mode(AioContext *ctx, int64_t *timeout)
@@ -609,8 +593,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
         atomic_add(&ctx->notify_me, 2);
     }
 
-    qemu_lockcnt_inc(&ctx->list_lock);
-
     if (ctx->poll_max_ns) {
         start = qemu_clock_get_ns(QEMU_CLOCK_REALTIME);
     }
@@ -713,8 +695,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
         progress |= aio_dispatch_handlers(ctx);
     }
 
-    qemu_lockcnt_dec(&ctx->list_lock);
-
     progress |= timerlistgroup_run_timers(&ctx->tlg);
 
     return progress;
diff --git a/util/aio-win32.c b/util/aio-win32.c
index d7c694e5ac..881bad0c28 100644
--- a/util/aio-win32.c
+++ b/util/aio-win32.c
@@ -176,7 +176,6 @@ bool aio_prepare(AioContext *ctx)
      * We have to walk very carefully in case aio_set_fd_handler is
      * called while we're walking.
      */
-    qemu_lockcnt_inc(&ctx->list_lock);
     rcu_read_lock();
 
     /* fill fd sets */
@@ -206,7 +205,6 @@ bool aio_prepare(AioContext *ctx)
         }
     }
     rcu_read_unlock();
-    qemu_lockcnt_dec(&ctx->list_lock);
     return have_select_revents;
 }
 
@@ -219,7 +217,6 @@ bool aio_pending(AioContext *ctx)
      * We have to walk very carefully in case aio_set_fd_handler is
      * called while we're walking.
      */
-    qemu_lockcnt_inc(&ctx->list_lock);
     rcu_read_lock();
     QLIST_FOREACH_RCU(node, &ctx->aio_handlers, node) {
         if (node->pfd.revents && node->io_notify) {
@@ -238,7 +235,6 @@ bool aio_pending(AioContext *ctx)
     }
 
     rcu_read_unlock();
-    qemu_lockcnt_dec(&ctx->list_lock);
     return result;
 }
 
@@ -296,10 +292,8 @@ static bool aio_dispatch_handlers(AioContext *ctx, HANDLE event)
 
 void aio_dispatch(AioContext *ctx)
 {
-    qemu_lockcnt_inc(&ctx->list_lock);
     aio_bh_poll(ctx);
     aio_dispatch_handlers(ctx, INVALID_HANDLE_VALUE);
-    qemu_lockcnt_dec(&ctx->list_lock);
     timerlistgroup_run_timers(&ctx->tlg);
 }
 
@@ -324,7 +318,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
         atomic_add(&ctx->notify_me, 2);
     }
 
-    qemu_lockcnt_inc(&ctx->list_lock);
     have_select_revents = aio_prepare(ctx);
 
     /* fill fd sets */
@@ -381,8 +374,6 @@ bool aio_poll(AioContext *ctx, bool blocking)
         progress |= aio_dispatch_handlers(ctx, event);
     } while (count > 0);
 
-    qemu_lockcnt_dec(&ctx->list_lock);
-
     progress |= timerlistgroup_run_timers(&ctx->tlg);
     return progress;
 }
diff --git a/util/async.c b/util/async.c
index c10642a385..5078deed83 100644
--- a/util/async.c
+++ b/util/async.c
@@ -100,6 +100,8 @@ int aio_bh_poll(AioContext *ctx)
     int ret;
     bool deleted = false;
 
+    qemu_lockcnt_inc(&ctx->list_lock);
+
     ret = 0;
     for (bh = atomic_rcu_read(&ctx->first_bh); bh; bh = next) {
         next = atomic_rcu_read(&bh->next);
@@ -124,10 +126,11 @@ int aio_bh_poll(AioContext *ctx)
 
     /* remove deleted bhs */
     if (!deleted) {
+        qemu_lockcnt_dec(&ctx->list_lock);
         return ret;
     }
 
-    if (qemu_lockcnt_dec_if_lock(&ctx->list_lock)) {
+    if (qemu_lockcnt_dec_and_lock(&ctx->list_lock)) {
         bhp = &ctx->first_bh;
         while (*bhp) {
             bh = *bhp;
@@ -138,7 +141,7 @@ int aio_bh_poll(AioContext *ctx)
                 bhp = &bh->next;
             }
         }
-        qemu_lockcnt_inc_and_unlock(&ctx->list_lock);
+        qemu_lockcnt_unlock(&ctx->list_lock);
     }
     return ret;
 }
-- 
2.19.1