From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: lvivier@redhat.com, dgilbert@redhat.com, peterx@redhat.com
Date: Tue, 8 Aug 2017 18:26:23 +0200
Message-Id: <20170808162629.32493-14-quintela@redhat.com>
In-Reply-To: <20170808162629.32493-1-quintela@redhat.com>
References: <20170808162629.32493-1-quintela@redhat.com>
Subject: [Qemu-devel] [PATCH v6 13/19] migration: Really use multiple pages at a time

We now send several pages at a time each time we wake up a thread.

Signed-off-by: Juan Quintela <quintela@redhat.com>

--

Use iovec's instead of creating the equivalent.
Clear memory used by pages (dave)
Use g_new0 (danp)
define MULTIFD_CONTINUE
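
[Editor's note, not part of the patch: a minimal standalone sketch of the
batching pattern this patch introduces. multifd_pages_t mirrors the struct
added in the diff below; PAGE_SIZE and GROUP_SIZE stand in for QEMU's
TARGET_PAGE_SIZE and migrate_multifd_group(), and pages_init()/pages_queue()
are hypothetical helpers, not QEMU API.]

#include <stdio.h>
#include <stdlib.h>
#include <sys/uio.h>

#define PAGE_SIZE  4096    /* stand-in for TARGET_PAGE_SIZE */
#define GROUP_SIZE 16      /* stand-in for migrate_multifd_group() */

/* Mirrors the multifd_pages_t added by the diff below. */
typedef struct {
    int num;            /* pages queued so far */
    size_t size;        /* capacity of the iov array */
    struct iovec *iov;  /* one entry per queued page */
} multifd_pages_t;

static void pages_init(multifd_pages_t *pages, size_t size)
{
    pages->num = 0;
    pages->size = size;
    pages->iov = calloc(size, sizeof(struct iovec));
}

/* Queue one page; return 1 when the group is full and must be flushed. */
static int pages_queue(multifd_pages_t *pages, void *address)
{
    pages->iov[pages->num].iov_base = address;
    pages->iov[pages->num].iov_len = PAGE_SIZE;
    pages->num++;
    return pages->num == pages->size;
}

int main(void)
{
    static char ram[64][PAGE_SIZE];
    multifd_pages_t pages;

    pages_init(&pages, GROUP_SIZE);
    for (int i = 0; i < 64; i++) {
        if (pages_queue(&pages, ram[i])) {
            /* The real code hands the full iovec to a send thread here. */
            printf("flushing %d pages in one batch\n", pages.num);
            pages.num = 0;
        }
    }
    free(pages.iov);
    return 0;
}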
---
 migration/ram.c | 57 ++++++++++++++++++++++++++++++++++++++++++++++++---------
 1 file changed, 48 insertions(+), 9 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 03f3427..7310da9 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -49,6 +49,7 @@
 #include "migration/colo.h"
 #include "sysemu/sysemu.h"
 #include "qemu/uuid.h"
+#include "qemu/iov.h"
 
 /***********************************************************/
 /* ram save/restore */
@@ -362,6 +363,15 @@ static void compress_threads_save_setup(void)
 
 /* Multiple fd's */
 
+/* used to continue on the same multifd group */
+#define MULTIFD_CONTINUE UINT16_MAX
+
+typedef struct {
+    int num;
+    size_t size;
+    struct iovec *iov;
+} multifd_pages_t;
+
 struct MultiFDSendParams {
     /* not changed */
     uint8_t id;
@@ -371,11 +381,7 @@ struct MultiFDSendParams {
     QemuMutex mutex;
     /* protected by param mutex */
     bool quit;
-    /* This is a temp field.  We are using it now to transmit
-       something the address of the page.  Later in the series, we
-       change it for the real page.
-    */
-    uint8_t *address;
+    multifd_pages_t pages;
     /* protected by multifd mutex */
     /* has the thread finish the last submitted job */
     bool done;
@@ -388,8 +394,24 @@ struct {
     int count;
     QemuMutex mutex;
     QemuSemaphore sem;
+    multifd_pages_t pages;
 } *multifd_send_state;
 
+static void multifd_init_group(multifd_pages_t *pages)
+{
+    pages->num = 0;
+    pages->size = migrate_multifd_group();
+    pages->iov = g_new0(struct iovec, pages->size);
+}
+
+static void multifd_clear_group(multifd_pages_t *pages)
+{
+    pages->num = 0;
+    pages->size = 0;
+    g_free(pages->iov);
+    pages->iov = NULL;
+}
+
 static void terminate_multifd_send_threads(void)
 {
     int i;
@@ -419,9 +441,11 @@ void multifd_save_cleanup(void)
         qemu_mutex_destroy(&p->mutex);
         qemu_sem_destroy(&p->sem);
         socket_send_channel_destroy(p->c);
+        multifd_clear_group(&p->pages);
     }
     g_free(multifd_send_state->params);
     multifd_send_state->params = NULL;
+    multifd_clear_group(&multifd_send_state->pages);
     g_free(multifd_send_state);
     multifd_send_state = NULL;
 }
@@ -454,8 +478,8 @@ static void *multifd_send_thread(void *opaque)
             qemu_mutex_unlock(&p->mutex);
             break;
         }
-        if (p->address) {
-            p->address = 0;
+        if (p->pages.num) {
+            p->pages.num = 0;
             qemu_mutex_unlock(&p->mutex);
             qemu_mutex_lock(&multifd_send_state->mutex);
             p->done = true;
@@ -484,6 +508,7 @@ int multifd_save_setup(void)
     multifd_send_state->count = 0;
     qemu_mutex_init(&multifd_send_state->mutex);
     qemu_sem_init(&multifd_send_state->sem, 0);
+    multifd_init_group(&multifd_send_state->pages);
     for (i = 0; i < thread_count; i++) {
         char thread_name[16];
         MultiFDSendParams *p = &multifd_send_state->params[i];
@@ -493,7 +518,7 @@ int multifd_save_setup(void)
         p->quit = false;
         p->id = i;
         p->done = true;
-        p->address = 0;
+        multifd_init_group(&p->pages);
        p->c = socket_send_channel_create();
         if (!p->c) {
             error_report("Error creating a send channel");
@@ -512,6 +537,17 @@ static uint16_t multifd_send_page(uint8_t *address, bool last_page)
 {
     int i;
     MultiFDSendParams *p = NULL; /* make happy gcc */
+    multifd_pages_t *pages = &multifd_send_state->pages;
+
+    pages->iov[pages->num].iov_base = address;
+    pages->iov[pages->num].iov_len = TARGET_PAGE_SIZE;
+    pages->num++;
+
+    if (!last_page) {
+        if (pages->num < (pages->size - 1)) {
+            return MULTIFD_CONTINUE;
+        }
+    }
 
     qemu_sem_wait(&multifd_send_state->sem);
     qemu_mutex_lock(&multifd_send_state->mutex);
@@ -525,7 +561,10 @@ static uint16_t multifd_send_page(uint8_t *address, bool last_page)
     }
     qemu_mutex_unlock(&multifd_send_state->mutex);
     qemu_mutex_lock(&p->mutex);
-    p->address = address;
+    p->pages.num = pages->num;
+    iov_copy(p->pages.iov, pages->num, pages->iov, pages->num, 0,
+             iov_size(pages->iov, pages->num));
+    pages->num = 0;
     qemu_mutex_unlock(&p->mutex);
     qemu_sem_post(&p->sem);
 
-- 
2.9.4
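
[Editor's note, not part of the patch: a self-contained pthread sketch of
the producer-to-thread handoff that multifd_send_page() performs in the last
hunk above. SendSlot and send_thread are hypothetical names loosely mirroring
MultiFDSendParams and multifd_send_thread; plain memcpy() stands in for
QEMU's iov_copy(), and printf() for the eventual write of the iovec to the
channel. Build with -pthread.]

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <string.h>
#include <sys/uio.h>

#define GROUP 4

/* Per-thread slot, loosely mirroring MultiFDSendParams. */
typedef struct {
    pthread_mutex_t mutex;   /* protects num, iov and quit */
    sem_t sem;               /* posted when work (or quit) is pending */
    int num;                 /* pages handed to this thread; 0 = idle */
    struct iovec iov[GROUP];
    int quit;
} SendSlot;

static void *send_thread(void *opaque)
{
    SendSlot *p = opaque;

    for (;;) {
        sem_wait(&p->sem);
        pthread_mutex_lock(&p->mutex);
        if (p->num) {
            /* The real thread writev()s the iovec to its channel here. */
            printf("thread sent %d pages\n", p->num);
            p->num = 0;
        } else if (p->quit) {
            pthread_mutex_unlock(&p->mutex);
            break;
        }
        pthread_mutex_unlock(&p->mutex);
    }
    return NULL;
}

int main(void)
{
    static char page[GROUP][4096];
    struct iovec batch[GROUP];
    SendSlot slot = { .num = 0, .quit = 0 };
    pthread_t tid;

    pthread_mutex_init(&slot.mutex, NULL);
    sem_init(&slot.sem, 0, 0);
    pthread_create(&tid, NULL, send_thread, &slot);

    /* Producer: fill a local batch, then hand it over under the lock. */
    for (int i = 0; i < GROUP; i++) {
        batch[i].iov_base = page[i];
        batch[i].iov_len = sizeof(page[i]);
    }
    pthread_mutex_lock(&slot.mutex);
    memcpy(slot.iov, batch, sizeof(batch));  /* stands in for iov_copy() */
    slot.num = GROUP;
    pthread_mutex_unlock(&slot.mutex);
    sem_post(&slot.sem);

    /* Ask the thread to quit; it drains any pending batch first. */
    pthread_mutex_lock(&slot.mutex);
    slot.quit = 1;
    pthread_mutex_unlock(&slot.mutex);
    sem_post(&slot.sem);
    pthread_join(tid, NULL);
    return 0;
}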