From: Juan Quintela <quintela@redhat.com>
To: qemu-devel@nongnu.org
Cc: lvivier@redhat.com, dgilbert@redhat.com, peterx@redhat.com
Date: Wed, 26 Apr 2017 09:32:47 +0200
Message-Id: <20170426073247.7441-2-quintela@redhat.com>
In-Reply-To: <20170426073247.7441-1-quintela@redhat.com>
References: <20170426073247.7441-1-quintela@redhat.com>
Subject: [Qemu-devel] [PATCH] ram: Split dirty bitmap by RAMBlock

Both the ram bitmap and the unsent bitmap are split by RAMBlock.
Signed-off-by: Juan Quintela <quintela@redhat.com>

--

Fix compilation when DEBUG_POSTCOPY is enabled (thanks Hailiang)

Signed-off-by: Juan Quintela <quintela@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Reviewed-by: zhanghailiang
---
 include/exec/ram_addr.h          |  13 +-
 include/migration/migration.h    |   3 +-
 include/migration/postcopy-ram.h |   3 -
 migration/postcopy-ram.c         |   5 +-
 migration/ram.c                  | 273 +++++++++++++++------------------
 5 files changed, 119 insertions(+), 178 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 6436a41..c56b35b 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -39,6 +39,14 @@ struct RAMBlock {
     QLIST_HEAD(, RAMBlockNotifier) ramblock_notifiers;
     int fd;
     size_t page_size;
+    /* dirty bitmap used during migration */
+    unsigned long *bmap;
+    /* bitmap of pages that haven't been sent even once
+     * only maintained and used in postcopy at the moment
+     * where it's used to send the dirtymap at the start
+     * of the postcopy phase
+     */
+    unsigned long *unsentmap;
 };
 
 static inline bool offset_in_ramblock(RAMBlock *b, ram_addr_t offset)
@@ -360,16 +368,15 @@ static inline void cpu_physical_memory_clear_dirty_range(ram_addr_t start,
 
 
 static inline
-uint64_t cpu_physical_memory_sync_dirty_bitmap(unsigned long *dest,
-                                               RAMBlock *rb,
+uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
                                                ram_addr_t start,
                                                ram_addr_t length,
                                                uint64_t *real_dirty_pages)
 {
     ram_addr_t addr;
-    start = rb->offset + start;
     unsigned long page = BIT_WORD(start >> TARGET_PAGE_BITS);
     uint64_t num_dirty = 0;
+    unsigned long *dest = rb->bmap;
 
     /* start address is aligned at the start of a word? */
     if (((page * BITS_PER_LONG) << TARGET_PAGE_BITS) == start) {
diff --git a/include/migration/migration.h b/include/migration/migration.h
index ba1a16c..e29cb01 100644
--- a/include/migration/migration.h
+++ b/include/migration/migration.h
@@ -266,7 +266,8 @@ uint64_t xbzrle_mig_pages_cache_miss(void);
 double xbzrle_mig_cache_miss_rate(void);
 
 void ram_handle_compressed(void *host, uint8_t ch, uint64_t size);
-void ram_debug_dump_bitmap(unsigned long *todump, bool expected);
+void ram_debug_dump_bitmap(unsigned long *todump, bool expected,
+                           unsigned long pages);
 /* For outgoing discard bitmap */
 int ram_postcopy_send_discard_bitmap(MigrationState *ms);
 /* For incoming postcopy discard */
diff --git a/include/migration/postcopy-ram.h b/include/migration/postcopy-ram.h
index 8e036b9..4c25f03 100644
--- a/include/migration/postcopy-ram.h
+++ b/include/migration/postcopy-ram.h
@@ -43,12 +43,9 @@ int postcopy_ram_prepare_discard(MigrationIncomingState *mis);
 
 /*
  * Called at the start of each RAMBlock by the bitmap code.
- * 'offset' is the bitmap offset of the named RAMBlock in the migration
- * bitmap.
  * Returns a new PDS
  */
 PostcopyDiscardState *postcopy_discard_send_init(MigrationState *ms,
-                                                 unsigned long offset,
                                                  const char *name);
 
 /*
diff --git a/migration/postcopy-ram.c b/migration/postcopy-ram.c
index 85fd8d7..e3f4a37 100644
--- a/migration/postcopy-ram.c
+++ b/migration/postcopy-ram.c
@@ -33,7 +33,6 @@
 
 struct PostcopyDiscardState {
     const char *ramblock_name;
-    uint64_t offset; /* Bitmap entry for the 1st bit of this RAMBlock */
     uint16_t cur_entry;
     /*
      * Start and length of a discard range (bytes)
@@ -717,14 +716,12 @@ void *postcopy_get_tmp_page(MigrationIncomingState *mis)
  * returns: a new PDS.
  */
 PostcopyDiscardState *postcopy_discard_send_init(MigrationState *ms,
-                                                 unsigned long offset,
                                                  const char *name)
 {
     PostcopyDiscardState *res = g_malloc0(sizeof(PostcopyDiscardState));
 
     if (res) {
         res->ramblock_name = name;
-        res->offset = offset;
     }
 
     return res;
@@ -745,7 +742,7 @@ void postcopy_discard_send_range(MigrationState *ms, PostcopyDiscardState *pds,
 {
     size_t tp_size = qemu_target_page_size();
     /* Convert to byte offsets within the RAM block */
-    pds->start_list[pds->cur_entry] = (start - pds->offset) * tp_size;
+    pds->start_list[pds->cur_entry] = start * tp_size;
     pds->length_list[pds->cur_entry] = length * tp_size;
     trace_postcopy_discard_send_range(pds->ramblock_name, start, length);
     pds->cur_entry++;
diff --git a/migration/ram.c b/migration/ram.c
index f48664e..235f400 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -138,19 +138,6 @@ out:
     return ret;
 }
 
-struct RAMBitmap {
-    struct rcu_head rcu;
-    /* Main migration bitmap */
-    unsigned long *bmap;
-    /* bitmap of pages that haven't been sent even once
-     * only maintained and used in postcopy at the moment
-     * where it's used to send the dirtymap at the start
-     * of the postcopy phase
-     */
-    unsigned long *unsentmap;
-};
-typedef struct RAMBitmap RAMBitmap;
-
 /*
  * An outstanding page request, on the source, having been received
  * and queued
@@ -220,8 +207,6 @@ struct RAMState {
     uint64_t postcopy_requests;
     /* protects modification of the bitmap */
     QemuMutex bitmap_mutex;
-    /* Ram Bitmap protected by RCU */
-    RAMBitmap *ram_bitmap;
     /* The RAMBlock used in the last src_page_requests */
     RAMBlock *last_req_rb;
     /* Queue of outstanding page requests from the destination */
@@ -614,22 +599,17 @@ static inline
 unsigned long migration_bitmap_find_dirty(RAMState *rs, RAMBlock *rb,
                                           unsigned long start)
 {
-    unsigned long base = rb->offset >> TARGET_PAGE_BITS;
-    unsigned long nr = base + start;
-    uint64_t rb_size = rb->used_length;
-    unsigned long size = base + (rb_size >> TARGET_PAGE_BITS);
-    unsigned long *bitmap;
-
+    unsigned long size = rb->used_length >> TARGET_PAGE_BITS;
+    unsigned long *bitmap = rb->bmap;
     unsigned long next;
 
-    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
-    if (rs->ram_bulk_stage && nr > base) {
-        next = nr + 1;
+    if (rs->ram_bulk_stage && start > 0) {
+        next = start + 1;
     } else {
-        next = find_next_bit(bitmap, size, nr);
+        next = find_next_bit(bitmap, size, start);
     }
 
-    return next - base;
+    return next;
 }
 
 static inline bool migration_bitmap_clear_dirty(RAMState *rs,
@@ -637,10 +617,8 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
                                                 unsigned long page)
 {
     bool ret;
-    unsigned long *bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
-    unsigned long nr = (rb->offset >> TARGET_PAGE_BITS) + page;
 
-    ret = test_and_clear_bit(nr, bitmap);
+    ret = test_and_clear_bit(page, rb->bmap);
 
     if (ret) {
         rs->migration_dirty_pages--;
@@ -651,10 +629,8 @@ static inline bool migration_bitmap_clear_dirty(RAMState *rs,
 static void migration_bitmap_sync_range(RAMState *rs, RAMBlock *rb,
                                         ram_addr_t start, ram_addr_t length)
 {
-    unsigned long *bitmap;
-    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
     rs->migration_dirty_pages +=
-        cpu_physical_memory_sync_dirty_bitmap(bitmap, rb, start, length,
+        cpu_physical_memory_sync_dirty_bitmap(rb, start, length,
                                               &rs->num_dirty_pages_period);
 }
 
@@ -1153,17 +1129,13 @@ static bool get_queued_page(RAMState *rs, PageSearchStatus *pss)
      * search already sent it.
      */
     if (block) {
-        unsigned long *bitmap;
         unsigned long page;
 
-        bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
-        page = (block->offset + offset) >> TARGET_PAGE_BITS;
-        dirty = test_bit(page, bitmap);
+        page = offset >> TARGET_PAGE_BITS;
+        dirty = test_bit(page, block->bmap);
         if (!dirty) {
             trace_get_queued_page_not_dirty(block->idstr, (uint64_t)offset,
-                                            page,
-                                            test_bit(page,
-                                                     atomic_rcu_read(&rs->ram_bitmap)->unsentmap));
+                page, test_bit(page, block->unsentmap));
         } else {
             trace_get_queued_page(block->idstr, (uint64_t)offset, page);
         }
@@ -1301,16 +1273,13 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
 
     /* Check the pages is dirty and if it is send it */
     if (migration_bitmap_clear_dirty(rs, pss->block, pss->page)) {
-        unsigned long *unsentmap;
         /*
          * If xbzrle is on, stop using the data compression after first
          * round of migration even if compression is enabled. In theory,
          * xbzrle can do better than compression.
          */
-        unsigned long page =
-            (pss->block->offset >> TARGET_PAGE_BITS) + pss->page;
-        if (migrate_use_compression()
-            && (rs->ram_bulk_stage || !migrate_use_xbzrle())) {
+        if (migrate_use_compression() &&
+            (rs->ram_bulk_stage || !migrate_use_xbzrle())) {
             res = ram_save_compressed_page(rs, pss, last_stage);
         } else {
             res = ram_save_page(rs, pss, last_stage);
@@ -1319,9 +1288,8 @@ static int ram_save_target_page(RAMState *rs, PageSearchStatus *pss,
         if (res < 0) {
             return res;
         }
-        unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
-        if (unsentmap) {
-            clear_bit(page, unsentmap);
+        if (pss->block->unsentmap) {
+            clear_bit(pss->page, pss->block->unsentmap);
         }
     }
 
@@ -1451,25 +1419,20 @@ void free_xbzrle_decoded_buf(void)
     xbzrle_decoded_buf = NULL;
 }
 
-static void migration_bitmap_free(RAMBitmap *bmap)
-{
-    g_free(bmap->bmap);
-    g_free(bmap->unsentmap);
-    g_free(bmap);
-}
-
 static void ram_migration_cleanup(void *opaque)
 {
-    RAMState *rs = opaque;
+    RAMBlock *block;
 
     /* caller have hold iothread lock or is in a bh, so there is
      * no writing race against this migration_bitmap
      */
-    RAMBitmap *bitmap = rs->ram_bitmap;
-    atomic_rcu_set(&rs->ram_bitmap, NULL);
-    if (bitmap) {
-        memory_global_dirty_log_stop();
-        call_rcu(bitmap, migration_bitmap_free, rcu);
+    memory_global_dirty_log_stop();
+
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        g_free(block->bmap);
+        block->bmap = NULL;
+        g_free(block->unsentmap);
+        block->unsentmap = NULL;
     }
 
     XBZRLE_cache_lock();
@@ -1501,27 +1464,22 @@ static void ram_state_reset(RAMState *rs)
  * of; it won't bother printing lines that are all this value.
  * If 'todump' is null the migration bitmap is dumped.
  */
-void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
+void ram_debug_dump_bitmap(unsigned long *todump, bool expected,
+                           unsigned long pages)
 {
-    unsigned long ram_pages = last_ram_page();
-    RAMState *rs = &ram_state;
     int64_t cur;
     int64_t linelen = 128;
     char linebuf[129];
 
-    if (!todump) {
-        todump = atomic_rcu_read(&rs->ram_bitmap)->bmap;
-    }
-
-    for (cur = 0; cur < ram_pages; cur += linelen) {
+    for (cur = 0; cur < pages; cur += linelen) {
         int64_t curb;
         bool found = false;
         /*
          * Last line; catch the case where the line length
          * is longer than remaining ram
          */
-        if (cur + linelen > ram_pages) {
-            linelen = ram_pages - cur;
+        if (cur + linelen > pages) {
+            linelen = pages - cur;
         }
         for (curb = 0; curb < linelen; curb++) {
             bool thisbit = test_bit(cur + curb, todump);
@@ -1539,14 +1497,12 @@ void ram_debug_dump_bitmap(unsigned long *todump, bool expected)
 
 void ram_postcopy_migrated_memory_release(MigrationState *ms)
 {
-    RAMState *rs = &ram_state;
     struct RAMBlock *block;
-    unsigned long *bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
 
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
-        unsigned long first = block->offset >> TARGET_PAGE_BITS;
-        unsigned long range = first + (block->used_length >> TARGET_PAGE_BITS);
-        unsigned long run_start = find_next_zero_bit(bitmap, range, first);
+        unsigned long *bitmap = block->bmap;
+        unsigned long range = block->used_length >> TARGET_PAGE_BITS;
+        unsigned long run_start = find_next_zero_bit(bitmap, range, 0);
 
         while (run_start < range) {
             unsigned long run_end = find_next_bit(bitmap, range, run_start + 1);
@@ -1573,16 +1529,13 @@ void ram_postcopy_migrated_memory_release(MigrationState *ms)
  */
 static int postcopy_send_discard_bm_ram(MigrationState *ms,
                                         PostcopyDiscardState *pds,
-                                        unsigned long start,
-                                        unsigned long length)
+                                        RAMBlock *block)
 {
-    RAMState *rs = &ram_state;
-    unsigned long end = start + length; /* one after the end */
+    unsigned long end = block->used_length >> TARGET_PAGE_BITS;
     unsigned long current;
-    unsigned long *unsentmap;
+    unsigned long *unsentmap = block->unsentmap;
 
-    unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
-    for (current = start; current < end; ) {
+    for (current = 0; current < end; ) {
         unsigned long one = find_next_bit(unsentmap, end, current);
 
         if (one <= end) {
@@ -1625,18 +1578,15 @@ static int postcopy_each_ram_send_discard(MigrationState *ms)
     int ret;
 
     QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
-        unsigned long first = block->offset >> TARGET_PAGE_BITS;
-        PostcopyDiscardState *pds = postcopy_discard_send_init(ms,
-                                                               first,
-                                                               block->idstr);
+        PostcopyDiscardState *pds =
+            postcopy_discard_send_init(ms, block->idstr);
 
         /*
          * Postcopy sends chunks of bitmap over the wire, but it
         * just needs indexes at this point, avoids it having
         * target page specific code.
         */
-        ret = postcopy_send_discard_bm_ram(ms, pds, first,
-                                           block->used_length >> TARGET_PAGE_BITS);
+        ret = postcopy_send_discard_bm_ram(ms, pds, block);
         postcopy_discard_send_finish(ms, pds);
         if (ret) {
             return ret;
@@ -1667,12 +1617,10 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
                                           PostcopyDiscardState *pds)
 {
     RAMState *rs = &ram_state;
-    unsigned long *bitmap;
-    unsigned long *unsentmap;
+    unsigned long *bitmap = block->bmap;
+    unsigned long *unsentmap = block->unsentmap;
     unsigned int host_ratio = block->page_size / TARGET_PAGE_SIZE;
-    unsigned long first = block->offset >> TARGET_PAGE_BITS;
-    unsigned long len = block->used_length >> TARGET_PAGE_BITS;
-    unsigned long last = first + (len - 1);
+    unsigned long pages = block->used_length >> TARGET_PAGE_BITS;
     unsigned long run_start;
 
     if (block->page_size == TARGET_PAGE_SIZE) {
@@ -1680,18 +1628,15 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
         return;
     }
 
-    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
-    unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
-
     if (unsent_pass) {
         /* Find a sent page */
-        run_start = find_next_zero_bit(unsentmap, last + 1, first);
+        run_start = find_next_zero_bit(unsentmap, pages, 0);
     } else {
         /* Find a dirty page */
-        run_start = find_next_bit(bitmap, last + 1, first);
+        run_start = find_next_bit(bitmap, pages, 0);
     }
 
-    while (run_start <= last) {
+    while (run_start < pages) {
         bool do_fixup = false;
         unsigned long fixup_start_addr;
         unsigned long host_offset;
@@ -1711,9 +1656,9 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
             /* Find the end of this run */
             unsigned long run_end;
             if (unsent_pass) {
-                run_end = find_next_bit(unsentmap, last + 1, run_start + 1);
+                run_end = find_next_bit(unsentmap, pages, run_start + 1);
             } else {
-                run_end = find_next_zero_bit(bitmap, last + 1, run_start + 1);
+                run_end = find_next_zero_bit(bitmap, pages, run_start + 1);
             }
             /*
              * If the end isn't at the start of a host page, then the
@@ -1770,11 +1715,10 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
 
         if (unsent_pass) {
             /* Find the next sent page for the next iteration */
-            run_start = find_next_zero_bit(unsentmap, last + 1,
-                                           run_start);
+            run_start = find_next_zero_bit(unsentmap, pages, run_start);
         } else {
             /* Find the next dirty page for the next iteration */
-            run_start = find_next_bit(bitmap, last + 1, run_start);
+            run_start = find_next_bit(bitmap, pages, run_start);
         }
     }
 }
@@ -1791,34 +1735,22 @@ static void postcopy_chunk_hostpages_pass(MigrationState *ms, bool unsent_pass,
  * Returns zero on success
  *
  * @ms: current migration state
+ * @block: block we want to work with
  */
-static int postcopy_chunk_hostpages(MigrationState *ms)
+static int postcopy_chunk_hostpages(MigrationState *ms, RAMBlock *block)
 {
-    RAMState *rs = &ram_state;
-    struct RAMBlock *block;
-
-    /* Easiest way to make sure we don't resume in the middle of a host-page */
-    rs->last_seen_block = NULL;
-    rs->last_sent_block = NULL;
-    rs->last_page = 0;
-
-    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
-        unsigned long first = block->offset >> TARGET_PAGE_BITS;
-
-        PostcopyDiscardState *pds =
-            postcopy_discard_send_init(ms, first, block->idstr);
-
-        /* First pass: Discard all partially sent host pages */
-        postcopy_chunk_hostpages_pass(ms, true, block, pds);
-        /*
-         * Second pass: Ensure that all partially dirty host pages are made
-         * fully dirty.
-         */
-        postcopy_chunk_hostpages_pass(ms, false, block, pds);
-
-        postcopy_discard_send_finish(ms, pds);
-    } /* ram_list loop */
-
+    PostcopyDiscardState *pds =
+        postcopy_discard_send_init(ms, block->idstr);
+
+    /* First pass: Discard all partially sent host pages */
+    postcopy_chunk_hostpages_pass(ms, true, block, pds);
+    /*
+     * Second pass: Ensure that all partially dirty host pages are made
+     * fully dirty.
+     */
+    postcopy_chunk_hostpages_pass(ms, false, block, pds);
+
+    postcopy_discard_send_finish(ms, pds);
    return 0;
 }
 
@@ -1836,47 +1768,53 @@ static int postcopy_chunk_hostpages(MigrationState *ms)
  * Hopefully this is pretty sparse
  *
  * @ms: current migration state
- */
+ * @block: block that contains the page we want to canonicalize
+ */
 int ram_postcopy_send_discard_bitmap(MigrationState *ms)
 {
     RAMState *rs = &ram_state;
+    RAMBlock *block;
     int ret;
-    unsigned long *bitmap, *unsentmap;
 
     rcu_read_lock();
 
     /* This should be our last sync, the src is now paused */
     migration_bitmap_sync(rs);
 
-    unsentmap = atomic_rcu_read(&rs->ram_bitmap)->unsentmap;
-    if (!unsentmap) {
-        /* We don't have a safe way to resize the sentmap, so
-         * if the bitmap was resized it will be NULL at this
-         * point.
+    /* Easiest way to make sure we don't resume in the middle of a host-page */
+    rs->last_seen_block = NULL;
+    rs->last_sent_block = NULL;
+    rs->last_page = 0;
+
+    QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+        unsigned long pages = block->used_length >> TARGET_PAGE_BITS;
+        unsigned long *bitmap = block->bmap;
+        unsigned long *unsentmap = block->unsentmap;
+
+        if (!unsentmap) {
+            /* We don't have a safe way to resize the sentmap, so
+             * if the bitmap was resized it will be NULL at this
+             * point.
+             */
+            error_report("migration ram resized during precopy phase");
+            rcu_read_unlock();
+            return -EINVAL;
+        }
+        /* Deal with TPS != HPS and huge pages */
+        ret = postcopy_chunk_hostpages(ms, block);
+        if (ret) {
+            rcu_read_unlock();
+            return ret;
+        }
+
+        /*
+         * Update the unsentmap to be unsentmap = unsentmap | dirty
         */
-        error_report("migration ram resized during precopy phase");
-        rcu_read_unlock();
-        return -EINVAL;
-    }
-
-    /* Deal with TPS != HPS and huge pages */
-    ret = postcopy_chunk_hostpages(ms);
-    if (ret) {
-        rcu_read_unlock();
-        return ret;
-    }
-
-    /*
-     * Update the unsentmap to be unsentmap = unsentmap | dirty
-     */
-    bitmap = atomic_rcu_read(&rs->ram_bitmap)->bmap;
-    bitmap_or(unsentmap, unsentmap, bitmap, last_ram_page());
-
-
-    trace_ram_postcopy_send_discard_bitmap();
+        bitmap_or(unsentmap, unsentmap, bitmap, pages);
 #ifdef DEBUG_POSTCOPY
-    ram_debug_dump_bitmap(unsentmap, true);
+        ram_debug_dump_bitmap(unsentmap, true, pages);
 #endif
+    }
+    trace_ram_postcopy_send_discard_bitmap();
 
     ret = postcopy_each_ram_send_discard(ms);
     rcu_read_unlock();
@@ -1918,8 +1856,6 @@ err:
 
 static int ram_state_init(RAMState *rs)
 {
-    unsigned long ram_bitmap_pages;
-
     memset(rs, 0, sizeof(*rs));
     qemu_mutex_init(&rs->bitmap_mutex);
     qemu_mutex_init(&rs->src_page_req_mutex);
@@ -1961,16 +1897,19 @@ static int ram_state_init(RAMState *rs)
     rcu_read_lock();
     ram_state_reset(rs);
 
-    rs->ram_bitmap = g_new0(RAMBitmap, 1);
     /* Skip setting bitmap if there is no RAM */
     if (ram_bytes_total()) {
-        ram_bitmap_pages = last_ram_page();
-        rs->ram_bitmap->bmap = bitmap_new(ram_bitmap_pages);
-        bitmap_set(rs->ram_bitmap->bmap, 0, ram_bitmap_pages);
+        RAMBlock *block;
 
-        if (migrate_postcopy_ram()) {
-            rs->ram_bitmap->unsentmap = bitmap_new(ram_bitmap_pages);
-            bitmap_set(rs->ram_bitmap->unsentmap, 0, ram_bitmap_pages);
+        QLIST_FOREACH_RCU(block, &ram_list.blocks, next) {
+            unsigned long pages = block->max_length >> TARGET_PAGE_BITS;
+
+            block->bmap = bitmap_new(pages);
+            bitmap_set(block->bmap, 0, pages);
+            if (migrate_postcopy_ram()) {
+                block->unsentmap = bitmap_new(pages);
+                bitmap_set(block->unsentmap, 0, pages);
+            }
         }
     }
 
-- 
2.9.3