From: peterx@redhat.com
To: qemu-devel@nongnu.org
Cc: Bryan Zhang, Prasad Pandit, Fabiano Rosas, peterx@redhat.com,
    Yuan Liu, Avihai Horon, Hao Xiang
Subject: [PATCH 03/14] migration/multifd: Drop MultiFDSendParams.quit, cleanup error paths
Date: Wed, 31 Jan 2024 18:31:00 +0800
Message-ID: <20240131103111.306523-4-peterx@redhat.com>
In-Reply-To: <20240131103111.306523-1-peterx@redhat.com>
References: <20240131103111.306523-1-peterx@redhat.com>

From: Peter Xu

The multifd send side has two fields to indicate an error quit:

  - MultiFDSendParams.quit
  - &multifd_send_state->exiting

Merge them into the global one.  The replacement is done by changing
all p->quit checks into checks of the global variable, which requires
no lock.

A few more things are done on top of this:

- multifd_send_terminate_threads()

  Move the xchg() of &multifd_send_state->exiting earlier, so that it
  also covers the tracepoint, migrate_set_error() and
  migrate_set_state().

- multifd_send_sync_main()

  In the 2nd loop, add one more check of the global variable to make
  sure we stop looping once QEMU has already decided to quit.

- multifd_tls_outgoing_handshake()

  Use multifd_send_terminate_threads() to set the error state.  That
  has the benefit of also updating MigrationState.error, so the first
  error hit on that specific channel is persisted.

- multifd_new_send_channel_async()

  Take a similar approach to the above: drop the migrate_set_error()
  call because multifd_send_terminate_threads() already covers it.
  Unwrap the helper multifd_new_send_channel_cleanup() along the way;
  it is not really needed anymore.
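For context (not part of the patch itself): after this change the
termination logic boils down to a single set-once "exiting" flag that
is checked lock-free everywhere.  Below is a minimal, self-contained
sketch of that pattern written with C11 atomics instead of QEMU's
qatomic_* helpers; the names send_should_exit() and terminate_threads()
are made up for illustration only.

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdio.h>

  /* One global flag replaces the per-channel p->quit fields. */
  static atomic_int exiting;

  /* Lock-free check, safe to call from any thread. */
  static bool send_should_exit(void)
  {
      return atomic_load(&exiting) != 0;
  }

  /*
   * Set-once teardown: atomic_exchange() returns the old value, so
   * only the first caller (the first error) performs the cleanup;
   * every later caller bails out immediately.
   */
  static void terminate_threads(const char *err)
  {
      if (atomic_exchange(&exiting, 1)) {
          return;
      }
      printf("terminating threads, first error: %s\n", err ? err : "(none)");
      /* ...record the error, post semaphores, shut down channels... */
  }

  int main(void)
  {
      terminate_threads("tls handshake failed");
      terminate_threads("later error, ignored");  /* no double teardown */
      printf("should exit: %d\n", send_should_exit());
      return 0;
  }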
Signed-off-by: Peter Xu
Suggested-by: Fabiano Rosas
---
 migration/multifd.h |  2 --
 migration/multifd.c | 85 ++++++++++++++++++---------------------------
 2 files changed, 33 insertions(+), 54 deletions(-)

diff --git a/migration/multifd.h b/migration/multifd.h
index 35d11f103c..7c040cb85a 100644
--- a/migration/multifd.h
+++ b/migration/multifd.h
@@ -95,8 +95,6 @@ typedef struct {
     QemuMutex mutex;
     /* is this channel thread running */
     bool running;
-    /* should this thread finish */
-    bool quit;
     /* multifd flags for each packet */
     uint32_t flags;
     /* global number of generated multifd packets */
diff --git a/migration/multifd.c b/migration/multifd.c
index b8d2c96533..2c98023d67 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -372,6 +372,11 @@ struct {
     MultiFDMethods *ops;
 } *multifd_send_state;
 
+static bool multifd_send_should_exit(void)
+{
+    return qatomic_read(&multifd_send_state->exiting);
+}
+
 /*
  * The migration thread can wait on either of the two semaphores. This
  * function can be used to kick the main thread out of waiting on either of
@@ -409,7 +414,7 @@ static int multifd_send_pages(void)
     MultiFDSendParams *p = NULL; /* make happy gcc */
     MultiFDPages_t *pages = multifd_send_state->pages;
 
-    if (qatomic_read(&multifd_send_state->exiting)) {
+    if (multifd_send_should_exit()) {
         return -1;
     }
 
@@ -421,14 +426,11 @@ static int multifd_send_pages(void)
      */
     next_channel %= migrate_multifd_channels();
     for (i = next_channel;; i = (i + 1) % migrate_multifd_channels()) {
-        p = &multifd_send_state->params[i];
-
-        qemu_mutex_lock(&p->mutex);
-        if (p->quit) {
-            error_report("%s: channel %d has already quit!", __func__, i);
-            qemu_mutex_unlock(&p->mutex);
+        if (multifd_send_should_exit()) {
             return -1;
         }
+        p = &multifd_send_state->params[i];
+        qemu_mutex_lock(&p->mutex);
         if (!p->pending_job) {
             p->pending_job++;
             next_channel = (i + 1) % migrate_multifd_channels();
@@ -483,6 +485,16 @@ static void multifd_send_terminate_threads(Error *err)
 {
     int i;
 
+    /*
+     * We don't want to exit each threads twice. Depending on where
+     * we get the error, or if there are two independent errors in two
+     * threads at the same time, we can end calling this function
+     * twice.
+     */
+    if (qatomic_xchg(&multifd_send_state->exiting, 1)) {
+        return;
+    }
+
     trace_multifd_send_terminate_threads(err != NULL);
 
     if (err) {
@@ -497,26 +509,13 @@ static void multifd_send_terminate_threads(Error *err)
         }
     }
 
-    /*
-     * We don't want to exit each threads twice. Depending on where
-     * we get the error, or if there are two independent errors in two
-     * threads at the same time, we can end calling this function
-     * twice.
-     */
-    if (qatomic_xchg(&multifd_send_state->exiting, 1)) {
-        return;
-    }
-
     for (i = 0; i < migrate_multifd_channels(); i++) {
         MultiFDSendParams *p = &multifd_send_state->params[i];
 
-        qemu_mutex_lock(&p->mutex);
-        p->quit = true;
         qemu_sem_post(&p->sem);
         if (p->c) {
             qio_channel_shutdown(p->c, QIO_CHANNEL_SHUTDOWN_BOTH, NULL);
         }
-        qemu_mutex_unlock(&p->mutex);
     }
 }
 
@@ -615,16 +614,13 @@ int multifd_send_sync_main(void)
     for (i = 0; i < migrate_multifd_channels(); i++) {
         MultiFDSendParams *p = &multifd_send_state->params[i];
 
-        trace_multifd_send_sync_main_signal(p->id);
-
-        qemu_mutex_lock(&p->mutex);
-
-        if (p->quit) {
-            error_report("%s: channel %d has already quit", __func__, i);
-            qemu_mutex_unlock(&p->mutex);
+        if (multifd_send_should_exit()) {
             return -1;
         }
 
+        trace_multifd_send_sync_main_signal(p->id);
+
+        qemu_mutex_lock(&p->mutex);
         p->packet_num = multifd_send_state->packet_num++;
         p->flags |= MULTIFD_FLAG_SYNC;
         p->pending_job++;
@@ -634,6 +630,10 @@ int multifd_send_sync_main(void)
     for (i = 0; i < migrate_multifd_channels(); i++) {
         MultiFDSendParams *p = &multifd_send_state->params[i];
 
+        if (multifd_send_should_exit()) {
+            return -1;
+        }
+
         qemu_sem_wait(&multifd_send_state->channels_ready);
         trace_multifd_send_sync_main_wait(p->id);
         qemu_sem_wait(&p->sem_sync);
@@ -671,7 +671,7 @@ static void *multifd_send_thread(void *opaque)
         qemu_sem_post(&multifd_send_state->channels_ready);
         qemu_sem_wait(&p->sem);
 
-        if (qatomic_read(&multifd_send_state->exiting)) {
+        if (multifd_send_should_exit()) {
             break;
         }
         qemu_mutex_lock(&p->mutex);
@@ -786,12 +786,7 @@ static void multifd_tls_outgoing_handshake(QIOTask *task,
 
     trace_multifd_tls_outgoing_handshake_error(ioc, error_get_pretty(err));
 
-    migrate_set_error(migrate_get_current(), err);
-    /*
-     * Error happen, mark multifd_send_thread status as 'quit' although it
-     * is not created, and then tell who pay attention to me.
-     */
-    p->quit = true;
+    multifd_send_terminate_threads(err);
     multifd_send_kick_main(p);
     error_free(err);
 }
@@ -857,22 +852,6 @@ static bool multifd_channel_connect(MultiFDSendParams *p,
     return true;
 }
 
-static void multifd_new_send_channel_cleanup(MultiFDSendParams *p,
-                                             QIOChannel *ioc, Error *err)
-{
-    migrate_set_error(migrate_get_current(), err);
-    /* Error happen, we need to tell who pay attention to me */
-    multifd_send_kick_main(p);
-    /*
-     * Although multifd_send_thread is not created, but main migration
-     * thread need to judge whether it is running, so we need to mark
-     * its status.
-     */
-    p->quit = true;
-    object_unref(OBJECT(ioc));
-    error_free(err);
-}
-
 static void multifd_new_send_channel_async(QIOTask *task, gpointer opaque)
 {
     MultiFDSendParams *p = opaque;
@@ -889,7 +868,10 @@ static void multifd_new_send_channel_async(QIOTask *task, gpointer opaque)
     }
 
     trace_multifd_new_send_channel_async_error(p->id, local_err);
-    multifd_new_send_channel_cleanup(p, ioc, local_err);
+    multifd_send_terminate_threads(local_err);
+    multifd_send_kick_main(p);
+    object_unref(OBJECT(ioc));
+    error_free(local_err);
 }
 
 static void multifd_new_send_channel_create(gpointer opaque)
@@ -921,7 +903,6 @@ int multifd_save_setup(Error **errp)
         qemu_mutex_init(&p->mutex);
         qemu_sem_init(&p->sem, 0);
         qemu_sem_init(&p->sem_sync, 0);
-        p->quit = false;
         p->pending_job = 0;
         p->id = i;
         p->pages = multifd_pages_init(page_count);
-- 
2.43.0