From: Stefan Hajnoczi <stefanha@redhat.com>
To: Kevin Wolf, qemu-devel@nongnu.org
Cc: Stefan Hajnoczi, Leonardo Bras, qemu-block@nongnu.org, Fam Zheng,
    Paolo Bonzini, Vladimir Sementsov-Ogievskiy, Fabiano Rosas, Eric Blake,
    Hanna Reitz, Juan Quintela, Peter Xu
Subject: [PATCH 6/6] nbd/server: introduce NBDClient->lock to protect fields
Date: Wed, 20 Dec 2023 20:49:03 -0500
Message-ID: <20231221014903.1537962-7-stefanha@redhat.com>
In-Reply-To: <20231221014903.1537962-1-stefanha@redhat.com>
References: <20231221014903.1537962-1-stefanha@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

NBDClient has a number of fields that are accessed by both the export
AioContext and the main loop thread. When the AioContext lock is removed,
these fields will need another form of protection.

Add NBDClient->lock and protect the fields that are accessed by both
threads. Also add assertions where possible, and otherwise add doc
comments stating the assumptions about which thread runs the code and
which locks must be held.

Note that this patch moves the client->recv_coroutine assertion from
nbd_co_receive_request() to nbd_trip(), where client->lock is held.
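The locking convention the patch establishes is simple: any NBDClient field
marked /* protected by lock */ may only be read or written with client->lock
held, whether the code runs in the export AioContext or in the main loop
thread. The sketch below is illustrative only and not part of the patch; the
struct and helper are hypothetical stand-ins, but QemuMutex and
WITH_QEMU_LOCK_GUARD() are the same primitives the diff uses:

```c
/* Illustrative sketch only -- not part of the patch below. */
#include "qemu/osdep.h"
#include "qemu/thread.h"     /* QemuMutex, qemu_mutex_init() */
#include "qemu/lockable.h"   /* WITH_QEMU_LOCK_GUARD() */

typedef struct {
    QemuMutex lock;
    int nb_requests;    /* protected by lock */
    bool quiescing;     /* protected by lock */
} ExampleClient;        /* hypothetical stand-in for NBDClient */

/* May run in either the export AioContext or the main loop thread. */
static bool example_can_start_request(ExampleClient *c, int max_requests)
{
    WITH_QEMU_LOCK_GUARD(&c->lock) {
        /* The guard drops c->lock on every exit path, including return. */
        if (c->quiescing) {
            return false;
        }
        return c->nb_requests < max_requests;
    }
    return false; /* not reached; silences -Wreturn-type */
}
```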
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
---
 nbd/server.c | 128 +++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 94 insertions(+), 34 deletions(-)

diff --git a/nbd/server.c b/nbd/server.c
index 527fbdab4a..4008ec7df9 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -125,23 +125,25 @@ struct NBDClient {
     int refcount; /* atomic */
     void (*close_fn)(NBDClient *client, bool negotiated);
 
+    QemuMutex lock;
+
     NBDExport *exp;
     QCryptoTLSCreds *tlscreds;
     char *tlsauthz;
     QIOChannelSocket *sioc; /* The underlying data channel */
     QIOChannel *ioc; /* The current I/O channel which may differ (eg TLS) */
 
-    Coroutine *recv_coroutine;
+    Coroutine *recv_coroutine; /* protected by lock */
 
     CoMutex send_lock;
     Coroutine *send_coroutine;
 
-    bool read_yielding;
-    bool quiescing;
+    bool read_yielding; /* protected by lock */
+    bool quiescing; /* protected by lock */
 
     QTAILQ_ENTRY(NBDClient) next;
-    int nb_requests;
-    bool closing;
+    int nb_requests; /* protected by lock */
+    bool closing; /* protected by lock */
 
     uint32_t check_align; /* If non-zero, check for aligned client requests */
 
@@ -1415,11 +1417,18 @@ nbd_read_eof(NBDClient *client, void *buffer, size_t size, Error **errp)
 
         len = qio_channel_readv(client->ioc, &iov, 1, errp);
         if (len == QIO_CHANNEL_ERR_BLOCK) {
-            client->read_yielding = true;
+            WITH_QEMU_LOCK_GUARD(&client->lock) {
+                if (client->quiescing) {
+                    return -EAGAIN;
+                }
+                client->read_yielding = true;
+            }
             qio_channel_yield(client->ioc, G_IO_IN);
-            client->read_yielding = false;
-            if (client->quiescing) {
-                return -EAGAIN;
+            WITH_QEMU_LOCK_GUARD(&client->lock) {
+                client->read_yielding = false;
+                if (client->quiescing) {
+                    return -EAGAIN;
+                }
             }
             continue;
         } else if (len < 0) {
@@ -1528,6 +1537,7 @@ void nbd_client_put(NBDClient *client)
             blk_exp_unref(&client->exp->common);
         }
         g_free(client->contexts.bitmaps);
+        qemu_mutex_destroy(&client->lock);
         g_free(client);
     }
 }
@@ -1536,11 +1546,13 @@ static void client_close(NBDClient *client, bool negotiated)
 {
     assert(qemu_in_main_thread());
 
-    if (client->closing) {
-        return;
-    }
+    WITH_QEMU_LOCK_GUARD(&client->lock) {
+        if (client->closing) {
+            return;
+        }
 
-    client->closing = true;
+        client->closing = true;
+    }
 
     /* Force requests to finish. They will drop their own references,
      * then we'll close the socket and free the NBDClient.
@@ -1554,6 +1566,7 @@ static void client_close(NBDClient *client, bool negotiated)
     }
 }
 
+/* Runs in export AioContext with client->lock held */
 static NBDRequestData *nbd_request_get(NBDClient *client)
 {
     NBDRequestData *req;
@@ -1566,6 +1579,7 @@ static NBDRequestData *nbd_request_get(NBDClient *client)
     return req;
 }
 
+/* Runs in export AioContext with client->lock held */
 static void nbd_request_put(NBDRequestData *req)
 {
     NBDClient *client = req->client;
@@ -1589,14 +1603,18 @@ static void blk_aio_attached(AioContext *ctx, void *opaque)
     NBDExport *exp = opaque;
     NBDClient *client;
 
+    assert(qemu_in_main_thread());
+
     trace_nbd_blk_aio_attached(exp->name, ctx);
 
     exp->common.ctx = ctx;
 
     QTAILQ_FOREACH(client, &exp->clients, next) {
-        assert(client->nb_requests == 0);
-        assert(client->recv_coroutine == NULL);
-        assert(client->send_coroutine == NULL);
+        WITH_QEMU_LOCK_GUARD(&client->lock) {
+            assert(client->nb_requests == 0);
+            assert(client->recv_coroutine == NULL);
+            assert(client->send_coroutine == NULL);
+        }
     }
 }
 
@@ -1604,6 +1622,8 @@ static void blk_aio_detach(void *opaque)
 {
     NBDExport *exp = opaque;
 
+    assert(qemu_in_main_thread());
+
     trace_nbd_blk_aio_detach(exp->name, exp->common.ctx);
 
     exp->common.ctx = NULL;
@@ -1614,8 +1634,12 @@ static void nbd_drained_begin(void *opaque)
     NBDExport *exp = opaque;
     NBDClient *client;
 
+    assert(qemu_in_main_thread());
+
     QTAILQ_FOREACH(client, &exp->clients, next) {
-        client->quiescing = true;
+        WITH_QEMU_LOCK_GUARD(&client->lock) {
+            client->quiescing = true;
+        }
     }
 }
 
@@ -1624,9 +1648,13 @@ static void nbd_drained_end(void *opaque)
     NBDExport *exp = opaque;
     NBDClient *client;
 
+    assert(qemu_in_main_thread());
+
     QTAILQ_FOREACH(client, &exp->clients, next) {
-        client->quiescing = false;
-        nbd_client_receive_next_request(client);
+        WITH_QEMU_LOCK_GUARD(&client->lock) {
+            client->quiescing = false;
+            nbd_client_receive_next_request(client);
+        }
     }
 }
 
@@ -1635,17 +1663,21 @@ static bool nbd_drained_poll(void *opaque)
     NBDExport *exp = opaque;
     NBDClient *client;
 
+    assert(qemu_in_main_thread());
+
     QTAILQ_FOREACH(client, &exp->clients, next) {
-        if (client->nb_requests != 0) {
-            /*
-             * If there's a coroutine waiting for a request on nbd_read_eof()
-             * enter it here so we don't depend on the client to wake it up.
-             */
-            if (client->recv_coroutine != NULL && client->read_yielding) {
-                qio_channel_wake_read(client->ioc);
+        WITH_QEMU_LOCK_GUARD(&client->lock) {
+            if (client->nb_requests != 0) {
+                /*
+                 * If there's a coroutine waiting for a request on nbd_read_eof()
+                 * enter it here so we don't depend on the client to wake it up.
+                 */
+                if (client->recv_coroutine != NULL && client->read_yielding) {
+                    qio_channel_wake_read(client->ioc);
+                }
+
+                return true;
             }
-
-            return true;
         }
     }
 
@@ -1656,6 +1688,8 @@ static void nbd_eject_notifier(Notifier *n, void *data)
 {
     NBDExport *exp = container_of(n, NBDExport, eject_notifier);
 
+    assert(qemu_in_main_thread());
+
     blk_exp_request_shutdown(&exp->common);
 }
 
@@ -2541,7 +2575,6 @@ static int coroutine_fn nbd_co_receive_request(NBDRequestData *req,
     int ret;
 
     g_assert(qemu_in_coroutine());
-    assert(client->recv_coroutine == qemu_coroutine_self());
     ret = nbd_receive_request(client, request, errp);
     if (ret < 0) {
         return ret;
@@ -2950,7 +2983,11 @@ static coroutine_fn void nbd_trip(void *opaque)
      */
 
     trace_nbd_trip();
+
+    qemu_mutex_lock(&client->lock);
+
     if (client->closing) {
+        qemu_mutex_unlock(&client->lock);
         aio_co_reschedule_self(qemu_get_aio_context());
         nbd_client_put(client);
         return;
@@ -2961,15 +2998,24 @@ static coroutine_fn void nbd_trip(void *opaque)
          * We're switching between AIO contexts. Don't attempt to receive a new
          * request and kick the main context which may be waiting for us.
          */
-        aio_co_reschedule_self(qemu_get_aio_context());
-        nbd_client_put(client);
         client->recv_coroutine = NULL;
+        qemu_mutex_unlock(&client->lock);
         aio_wait_kick();
+
+        aio_co_reschedule_self(qemu_get_aio_context());
+        nbd_client_put(client);
         return;
     }
 
     req = nbd_request_get(client);
-    ret = nbd_co_receive_request(req, &request, &local_err);
+
+    do {
+        assert(client->recv_coroutine == qemu_coroutine_self());
+        qemu_mutex_unlock(&client->lock);
+        ret = nbd_co_receive_request(req, &request, &local_err);
+        qemu_mutex_lock(&client->lock);
+    } while (ret == -EAGAIN && !client->quiescing);
+
     client->recv_coroutine = NULL;
 
     if (client->closing) {
@@ -2981,11 +3027,13 @@ static coroutine_fn void nbd_trip(void *opaque)
     }
 
     if (ret == -EAGAIN) {
-        assert(client->quiescing);
         goto done;
     }
 
     nbd_client_receive_next_request(client);
+
+    qemu_mutex_unlock(&client->lock);
+
     if (ret == -EIO) {
         goto disconnect;
     }
@@ -3024,8 +3072,10 @@ static coroutine_fn void nbd_trip(void *opaque)
     }
 
     qio_channel_set_cork(client->ioc, false);
+    qemu_mutex_lock(&client->lock);
 done:
     nbd_request_put(req);
+    qemu_mutex_unlock(&client->lock);
 
     aio_co_reschedule_self(qemu_get_aio_context());
     nbd_client_put(client);
@@ -3035,13 +3085,20 @@ disconnect:
     if (local_err) {
         error_reportf_err(local_err, "Disconnect client, due to: ");
     }
+
+    qemu_mutex_lock(&client->lock);
     nbd_request_put(req);
+    qemu_mutex_unlock(&client->lock);
 
     aio_co_reschedule_self(qemu_get_aio_context());
     client_close(client, true);
     nbd_client_put(client);
 }
 
+/*
+ * Runs in export AioContext and main loop thread. Caller must hold
+ * client->lock.
+ */
 static void nbd_client_receive_next_request(NBDClient *client)
 {
     if (!client->recv_coroutine && client->nb_requests < MAX_NBD_REQUESTS &&
@@ -3067,7 +3124,9 @@ static coroutine_fn void nbd_co_client_start(void *opaque)
         return;
     }
 
-    nbd_client_receive_next_request(client);
+    WITH_QEMU_LOCK_GUARD(&client->lock) {
+        nbd_client_receive_next_request(client);
+    }
 }
 
 /*
@@ -3084,6 +3143,7 @@ void nbd_client_new(QIOChannelSocket *sioc,
     Coroutine *co;
 
     client = g_new0(NBDClient, 1);
+    qemu_mutex_init(&client->lock);
     client->refcount = 1;
     client->tlscreds = tlscreds;
     if (tlscreds) {
-- 
2.43.0