From: Nicholas Piggin <npiggin@gmail.com>
The fast path in cpu_physical_memory_sync_dirty_bitmap() that tests large
aligned ranges forgot to bring the TCG TLB up to date after clearing
some of the dirty memory bitmap bits. This can result in stores through
the TCG TLB not setting the dirty memory bitmap, ultimately causing
memory corruption / lost updates during migration from a TCG host.
Fix this by calling cpu_physical_memory_dirty_bits_cleared() when
dirty bits have been cleared.
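
To illustrate the invariant being restored, here is a minimal standalone
model; it is not QEMU code, and the names guest_store, tlb_fast_write and
sync_dirty_bitmap are hypothetical stand-ins for the TCG store fast path,
the TLB notdirty state and the bitmap sync:

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical one-page model (not QEMU code): dirty_bit stands for
     * the migration dirty bitmap bit, tlb_fast_write for a TCG TLB entry
     * that lets stores bypass the slow path which records dirtying. */
    static bool dirty_bit;
    static bool tlb_fast_write;

    static void guest_store(void)
    {
        if (!tlb_fast_write) {
            /* Slow path: record the page as dirty, then arm the fast path. */
            dirty_bit = true;
            tlb_fast_write = true;
        }
        /* Fast path: the store itself never touches the bitmap. */
    }

    static void sync_dirty_bitmap(bool with_fix)
    {
        dirty_bit = false;           /* bits handed to migration, then cleared */
        if (with_fix) {
            tlb_fast_write = false;  /* models cpu_physical_memory_dirty_bits_cleared() */
        }
    }

    static void run(bool with_fix)
    {
        dirty_bit = false;
        tlb_fast_write = false;

        guest_store();               /* page dirtied, fast path armed */
        sync_dirty_bitmap(with_fix);
        guest_store();               /* store issued after the sync */
        printf("%s fix: dirty=%d\n", with_fix ? "with" : "without", dirty_bit);
    }

    int main(void)
    {
        run(false);   /* prints "without fix: dirty=0" -- the update is lost */
        run(true);    /* prints "with fix: dirty=1" -- the slow path sees it */
        return 0;
    }

Without the extra call, the second guest_store() takes the fast path and
the bitmap sync never learns about the page, which is the lost update the
patch fixes.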
Fixes: aa8dc044772 ("migration: synchronize memory bitmap 64bits at a time")
Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
Tested-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20240219061731.232570-1-npiggin@gmail.com>
[PMD: Split patch in 2: part 2/2, slightly adapt description]
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
---
include/exec/ram_addr.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index b060ea9176..de45ba7bc9 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -513,6 +513,9 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
idx++;
}
}
+ if (num_dirty) {
+ cpu_physical_memory_dirty_bits_cleared(start, length);
+ }
if (rb->clear_bmap) {
/*
--
2.41.0