From: Kevin Wolf <kwolf@redhat.com>
To: qemu-block@nongnu.org
Cc: kwolf@redhat.com, richard.henderson@linaro.org, qemu-devel@nongnu.org
Subject: [PULL 14/25] thread-pool: use ThreadPool from the running thread
Date: Tue, 25 Apr 2023 15:13:48 +0200
Message-Id: <20230425131359.259007-15-kwolf@redhat.com>
In-Reply-To: <20230425131359.259007-1-kwolf@redhat.com>
References: <20230425131359.259007-1-kwolf@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

From: Emanuele Giuseppe Esposito <eesposit@redhat.com>

Use qemu_get_current_aio_context() where possible, since we always
submit work to the current thread anyway.

We also want to be sure that the thread submitting the work is the
same one that processes the pool, to avoid adding synchronization to
the pool list.

Signed-off-by: Emanuele Giuseppe Esposito <eesposit@redhat.com>
Message-Id: <20230203131731.851116-4-eesposit@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
---
 include/block/thread-pool.h |  5 +++++
 block/file-posix.c          | 21 ++++++++++-----------
 block/file-win32.c          |  2 +-
 block/qcow2-threads.c       |  2 +-
 util/thread-pool.c          |  9 ++++-----
 5 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/include/block/thread-pool.h b/include/block/thread-pool.h
index 95ff2b0bdb..c408bde74c 100644
--- a/include/block/thread-pool.h
+++ b/include/block/thread-pool.h
@@ -29,12 +29,17 @@ typedef struct ThreadPool ThreadPool;
 ThreadPool *thread_pool_new(struct AioContext *ctx);
 void thread_pool_free(ThreadPool *pool);
 
+/*
+ * thread_pool_submit* API: submit I/O requests in the thread's
+ * current AioContext.
+ */
 BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
                                    ThreadPoolFunc *func, void *arg,
                                    BlockCompletionFunc *cb, void *opaque);
 int coroutine_fn thread_pool_submit_co(ThreadPool *pool,
                                        ThreadPoolFunc *func, void *arg);
 void thread_pool_submit(ThreadPool *pool, ThreadPoolFunc *func, void *arg);
+
 void thread_pool_update_params(ThreadPool *pool, struct AioContext *ctx);
 
 #endif
diff --git a/block/file-posix.c b/block/file-posix.c
index 30cb4ae421..173b3b1653 100644
--- a/block/file-posix.c
+++ b/block/file-posix.c
@@ -2040,11 +2040,10 @@ out:
     return result;
 }
 
-static int coroutine_fn raw_thread_pool_submit(BlockDriverState *bs,
-                                               ThreadPoolFunc func, void *arg)
+static int coroutine_fn raw_thread_pool_submit(ThreadPoolFunc func, void *arg)
 {
     /* @bs can be NULL, bdrv_get_aio_context() returns the main context then */
-    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
     return thread_pool_submit_co(pool, func, arg);
 }
 
@@ -2112,7 +2111,7 @@ static int coroutine_fn raw_co_prw(BlockDriverState *bs, uint64_t offset,
     };
 
     assert(qiov->size == bytes);
-    return raw_thread_pool_submit(bs, handle_aiocb_rw, &acb);
+    return raw_thread_pool_submit(handle_aiocb_rw, &acb);
 }
 
 static int coroutine_fn raw_co_preadv(BlockDriverState *bs, int64_t offset,
@@ -2181,7 +2180,7 @@ static int coroutine_fn raw_co_flush_to_disk(BlockDriverState *bs)
         return luring_co_submit(bs, s->fd, 0, NULL, QEMU_AIO_FLUSH);
     }
 #endif
-    return raw_thread_pool_submit(bs, handle_aiocb_flush, &acb);
+    return raw_thread_pool_submit(handle_aiocb_flush, &acb);
 }
 
 static void raw_aio_attach_aio_context(BlockDriverState *bs,
@@ -2243,7 +2242,7 @@ raw_regular_truncate(BlockDriverState *bs, int fd, int64_t offset,
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_truncate, &acb);
+    return raw_thread_pool_submit(handle_aiocb_truncate, &acb);
 }
 
 static int coroutine_fn raw_co_truncate(BlockDriverState *bs, int64_t offset,
@@ -2992,7 +2991,7 @@ raw_do_pdiscard(BlockDriverState *bs, int64_t offset, int64_t bytes,
         acb.aio_type |= QEMU_AIO_BLKDEV;
     }
 
-    ret = raw_thread_pool_submit(bs, handle_aiocb_discard, &acb);
+    ret = raw_thread_pool_submit(handle_aiocb_discard, &acb);
     raw_account_discard(s, bytes, ret);
     return ret;
 }
@@ -3067,7 +3066,7 @@ raw_do_pwrite_zeroes(BlockDriverState *bs, int64_t offset, int64_t bytes,
         handler = handle_aiocb_write_zeroes;
     }
 
-    return raw_thread_pool_submit(bs, handler, &acb);
+    return raw_thread_pool_submit(handler, &acb);
 }
 
 static int coroutine_fn raw_co_pwrite_zeroes(
@@ -3305,7 +3304,7 @@ raw_co_copy_range_to(BlockDriverState *bs,
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_copy_range, &acb);
+    return raw_thread_pool_submit(handle_aiocb_copy_range, &acb);
 }
 
 BlockDriver bdrv_file = {
@@ -3635,7 +3634,7 @@ hdev_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
         struct sg_io_hdr *io_hdr = buf;
         if (io_hdr->cmdp[0] == PERSISTENT_RESERVE_OUT ||
             io_hdr->cmdp[0] == PERSISTENT_RESERVE_IN) {
-            return pr_manager_execute(s->pr_mgr, bdrv_get_aio_context(bs),
+            return pr_manager_execute(s->pr_mgr, qemu_get_current_aio_context(),
                                       s->fd, io_hdr);
         }
     }
@@ -3651,7 +3650,7 @@ hdev_co_ioctl(BlockDriverState *bs, unsigned long int req, void *buf)
         },
     };
 
-    return raw_thread_pool_submit(bs, handle_aiocb_ioctl, &acb);
+    return raw_thread_pool_submit(handle_aiocb_ioctl, &acb);
 }
 #endif /* linux */
 
diff --git a/block/file-win32.c b/block/file-win32.c
index 1763b8662e..0aedb0875c 100644
--- a/block/file-win32.c
+++ b/block/file-win32.c
@@ -168,7 +168,7 @@ static BlockAIOCB *paio_submit(BlockDriverState *bs, HANDLE hfile,
     acb->aio_offset = offset;
 
     trace_file_paio_submit(acb, opaque, offset, count, type);
-    pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    pool = aio_get_thread_pool(qemu_get_current_aio_context());
     return thread_pool_submit_aio(pool, aio_worker, acb, cb, opaque);
 }
 
diff --git a/block/qcow2-threads.c b/block/qcow2-threads.c
index 953bbe6df8..6d2e6b7bf4 100644
--- a/block/qcow2-threads.c
+++ b/block/qcow2-threads.c
@@ -43,7 +43,7 @@ qcow2_co_process(BlockDriverState *bs, ThreadPoolFunc *func, void *arg)
 {
     int ret;
     BDRVQcow2State *s = bs->opaque;
-    ThreadPool *pool = aio_get_thread_pool(bdrv_get_aio_context(bs));
+    ThreadPool *pool = aio_get_thread_pool(qemu_get_current_aio_context());
 
     qemu_co_mutex_lock(&s->lock);
     while (s->nb_threads >= QCOW2_MAX_THREADS) {
diff --git a/util/thread-pool.c b/util/thread-pool.c
index 31113b5860..a70abb8a59 100644
--- a/util/thread-pool.c
+++ b/util/thread-pool.c
@@ -48,7 +48,7 @@ struct ThreadPoolElement {
     /* Access to this list is protected by lock. */
     QTAILQ_ENTRY(ThreadPoolElement) reqs;
 
-    /* Access to this list is protected by the global mutex. */
+    /* This list is only written by the thread pool's mother thread. */
     QLIST_ENTRY(ThreadPoolElement) all;
 };
 
@@ -175,7 +175,6 @@ static void thread_pool_completion_bh(void *opaque)
     ThreadPool *pool = opaque;
     ThreadPoolElement *elem, *next;
 
-    aio_context_acquire(pool->ctx);
 restart:
     QLIST_FOREACH_SAFE(elem, &pool->head, all, next) {
         if (elem->state != THREAD_DONE) {
@@ -195,9 +194,7 @@ restart:
              */
             qemu_bh_schedule(pool->completion_bh);
 
-            aio_context_release(pool->ctx);
             elem->common.cb(elem->common.opaque, elem->ret);
-            aio_context_acquire(pool->ctx);
 
             /* We can safely cancel the completion_bh here regardless of someone
              * else having scheduled it meanwhile because we reenter the
@@ -211,7 +208,6 @@ restart:
             qemu_aio_unref(elem);
         }
     }
-    aio_context_release(pool->ctx);
 }
 
 static void thread_pool_cancel(BlockAIOCB *acb)
@@ -251,6 +247,9 @@ BlockAIOCB *thread_pool_submit_aio(ThreadPool *pool,
 {
     ThreadPoolElement *req;
 
+    /* Assert that the thread submitting work is the same running the pool */
+    assert(pool->ctx == qemu_get_current_aio_context());
+
     req = qemu_aio_get(&thread_pool_aiocb_info, NULL, cb, opaque);
     req->func = func;
     req->arg = arg;
-- 
2.40.0
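
To illustrate the convention this patch enforces (the submitter must
already be running in the AioContext that owns the pool), here is a
minimal sketch of a caller written against the post-patch API. The
names my_worker and my_submit are hypothetical, and the example relies
on QEMU-internal headers, so it only builds inside the QEMU tree:

    /* Sketch only: hypothetical caller of the post-patch API. */
    #include "qemu/osdep.h"
    #include "qemu/coroutine.h"
    #include "block/aio.h"
    #include "block/thread-pool.h"

    /*
     * Runs in a pool worker thread; its int return value is what
     * thread_pool_submit_co() yields back to the coroutine.
     */
    static int my_worker(void *opaque)
    {
        int *value = opaque;
        *value *= 2;    /* stand-in for blocking work, e.g. an ioctl */
        return 0;
    }

    static int coroutine_fn my_submit(int *value)
    {
        /*
         * Look the pool up via the *current* AioContext, so the new
         * assertion in thread_pool_submit_aio() holds:
         * pool->ctx == qemu_get_current_aio_context().
         */
        ThreadPool *pool =
            aio_get_thread_pool(qemu_get_current_aio_context());
        return thread_pool_submit_co(pool, my_worker, value);
    }

raw_thread_pool_submit() in block/file-posix.c above is exactly this
pattern, with the BlockDriverState-based pool lookup removed.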