From: Wei Yang
To: qemu-devel@nongnu.org
Cc: Wei Yang, dgilbert@redhat.com, peterx@redhat.com, quintela@redhat.com
Date: Tue, 4 Jun 2019 14:17:27 +0800
Message-Id: <20190604061727.6857-1-richardw.yang@linux.intel.com>
Subject: [Qemu-devel] [PATCH v3] migration/ram: leave RAMBlock->bmap blank on allocating

During migration, we sync the bitmap from ram_list.dirty_memory to
RAMBlock.bmap in cpu_physical_memory_sync_dirty_bitmap(). Since we set
both RAMBlock.bmap and ram_list.dirty_memory to all ones, the sync in
the first round is meaningless, duplicated work.
Leaving RAMBlock->bmap blank on allocating has a side effect on
migration_dirty_pages, since it is calculated from the result of
cpu_physical_memory_sync_dirty_bitmap(). To keep it right, we need to
set migration_dirty_pages to 0 in ram_state_init().

Signed-off-by: Wei Yang
Reviewed-by: Dr. David Alan Gilbert
Acked-by: Peter Xu

---
v3: adjust comment based on Peter's comments
v2: add a comment explaining why leaving RAMBlock.bmap clear
---
 migration/ram.c | 18 +++++++++++++-----
 1 file changed, 13 insertions(+), 5 deletions(-)

diff --git a/migration/ram.c b/migration/ram.c
index 4c60869226..c3ece382ae 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -3182,11 +3182,11 @@ static int ram_state_init(RAMState **rsp)
     QSIMPLEQ_INIT(&(*rsp)->src_page_requests);
 
     /*
-     * Count the total number of pages used by ram blocks not including any
-     * gaps due to alignment or unplugs.
+     * This must match with the initial values of dirty bitmap.
+     * Currently we initialize the dirty bitmap to all zeros so
+     * here the total dirty page count is zero.
      */
-    (*rsp)->migration_dirty_pages = ram_bytes_total() >> TARGET_PAGE_BITS;
-
+    (*rsp)->migration_dirty_pages = 0;
     ram_state_reset(*rsp);
 
     return 0;
@@ -3201,8 +3201,16 @@ static void ram_list_init_bitmaps(void)
     if (ram_bytes_total()) {
         RAMBLOCK_FOREACH_NOT_IGNORED(block) {
             pages = block->max_length >> TARGET_PAGE_BITS;
+            /*
+             * The initial dirty bitmap for migration must be set with all
+             * ones to make sure we'll migrate every guest RAM page to
+             * destination.
+             * Here we didn't set RAMBlock.bmap simply because it is already
+             * set in ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION] in
+             * ram_block_add, and that's where we'll sync the dirty bitmaps.
+             * Here setting RAMBlock.bmap would be fine too but not necessary.
+             */
             block->bmap = bitmap_new(pages);
-            bitmap_set(block->bmap, 0, pages);
             if (migrate_postcopy_ram()) {
                 block->unsentmap = bitmap_new(pages);
                 bitmap_set(block->unsentmap, 0, pages);
-- 
2.19.1