From nobody Sun Apr 28 05:23:22 2024
From: Peter Lieven <pl@kamp.de>
To: qemu-devel@nongnu.org
Cc: famz@redhat.com, quintela@redhat.com, Peter Lieven <pl@kamp.de>,
    dgilbert@redhat.com, qemu-stable@nongnu.org, stefanha@redhat.com,
    pbonzini@redhat.com
Date: Tue, 26 Sep 2017 12:16:55 +0200
Message-Id: <1506421015-4737-1-git-send-email-pl@kamp.de>
Subject: [Qemu-devel] [PATCH V2] migration: disable auto-converge during bulk block migration
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

auto-converge and block migration currently do not play well together.
During block migration the auto-converge logic detects that ram
migration makes no progress and thus throttles down the vm until it
nearly stalls completely. Avoid this by disabling the throttling logic
during the bulk phase of the block migration.
Cc: qemu-stable@nongnu.org
Signed-off-by: Peter Lieven <pl@kamp.de>
---
V1->V2: add comment why we disable auto-converge during bulk block
        migration [Stefan]

 migration/block.c | 5 +++++
 migration/ram.c   | 6 +++++-
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/migration/block.c b/migration/block.c
index 9171f60..606ad4d 100644
--- a/migration/block.c
+++ b/migration/block.c
@@ -161,6 +161,11 @@ int blk_mig_active(void)
     return !QSIMPLEQ_EMPTY(&block_mig_state.bmds_list);
 }
 
+int blk_mig_bulk_active(void)
+{
+    return blk_mig_active() && !block_mig_state.bulk_completed;
+}
+
 uint64_t blk_mig_bytes_transferred(void)
 {
     BlkMigDevState *bmds;
diff --git a/migration/ram.c b/migration/ram.c
index 88ca69e..b83f897 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -46,6 +46,7 @@
 #include "exec/ram_addr.h"
 #include "qemu/rcu_queue.h"
 #include "migration/colo.h"
+#include "migration/block.h"
 
 /***********************************************************/
 /* ram save/restore */
@@ -825,7 +826,10 @@ static void migration_bitmap_sync(RAMState *rs)
             / (end_time - rs->time_last_bitmap_sync);
         bytes_xfer_now = ram_counters.transferred;
 
-        if (migrate_auto_converge()) {
+        /* During block migration the auto-converge logic incorrectly detects
+         * that ram migration makes no progress. Avoid this by disabling the
+         * throttling logic during the bulk phase of block migration. */
+        if (migrate_auto_converge() && !blk_mig_bulk_active()) {
             /* The following detection logic can be refined later. For now:
                Check to see if the dirtied bytes is 50% more than the approx.
                amount of bytes that just got transferred since the last time we
-- 
1.9.1