In cpu_physical_memory_sync_dirty_bitmap(rb, start, ...), the 2nd
argument 'start' is relative to the start of the ramblock 'rb'. When
it is used to access the dirty memory bitmap of ram_list (i.e.
ram_list.dirty_memory[DIRTY_MEMORY_MIGRATION]->blocks[]), the offset of
the ramblock within all RAM (i.e. rb->offset) must be added to it.
This has been missed since commit 6b6712efcc. For a ramblock of a host
memory backend whose offset is not zero, cpu_physical_memory_sync_dirty_bitmap()
therefore synchronizes the wrong part of the ram_list dirty memory
bitmap into the per-ramblock dirty bitmap. As a result, a guest with a
host memory backend may crash after migration.

Fix it by adding the ramblock's offset when accessing the dirty memory
bitmap of ram_list in cpu_physical_memory_sync_dirty_bitmap().

Reported-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
---
Changes in v2:
* Avoid shadowing variable 'offset'. (Paolo)
---
include/exec/ram_addr.h | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 73d1bea8b6..c04f4f67f6 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -386,8 +386,9 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
int k;
int nr = BITS_TO_LONGS(length >> TARGET_PAGE_BITS);
unsigned long * const *src;
- unsigned long idx = (page * BITS_PER_LONG) / DIRTY_MEMORY_BLOCK_SIZE;
- unsigned long offset = BIT_WORD((page * BITS_PER_LONG) %
+ unsigned long word = BIT_WORD((start + rb->offset) >> TARGET_PAGE_BITS);
+ unsigned long idx = (word * BITS_PER_LONG) / DIRTY_MEMORY_BLOCK_SIZE;
+ unsigned long offset = BIT_WORD((word * BITS_PER_LONG) %
DIRTY_MEMORY_BLOCK_SIZE);
rcu_read_lock();
@@ -414,9 +415,11 @@ uint64_t cpu_physical_memory_sync_dirty_bitmap(RAMBlock *rb,
rcu_read_unlock();
} else {
+ ram_addr_t offset = rb->offset;
+
for (addr = 0; addr < length; addr += TARGET_PAGE_SIZE) {
if (cpu_physical_memory_test_and_clear_dirty(
- start + addr,
+ start + addr + offset,
TARGET_PAGE_SIZE,
DIRTY_MEMORY_MIGRATION)) {
*real_dirty_pages += 1;
--
2.11.0
On Wed, Jun 28, 2017 at 04:37:04PM +0800, Haozhong Zhang wrote:
> In cpu_physical_memory_sync_dirty_bitmap(rb, start, ...), the 2nd
> argument 'start' is relative to the start of the ramblock 'rb'.
[...]

Tested-by: Stefan Hajnoczi <stefanha@redhat.com>
On 28/06/2017 10:37, Haozhong Zhang wrote:
> In cpu_physical_memory_sync_dirty_bitmap(rb, start, ...), the 2nd
> argument 'start' is relative to the start of the ramblock 'rb'.
[...]

Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Juan, please take care of this yourself!

Thanks,

Paolo
Paolo Bonzini <pbonzini@redhat.com> wrote:
> On 28/06/2017 10:37, Haozhong Zhang wrote:
>> In cpu_physical_memory_sync_dirty_bitmap(rb, start, ...), the 2nd
>> argument 'start' is relative to the start of the ramblock 'rb'.
[...]
>
> Acked-by: Paolo Bonzini <pbonzini@redhat.com>
>
> Juan, please take care of this yourself!

Already sent in a pull request 5 mins ago.
(so, it don't have your Ack :-)

Thanks, Juan.