A single ram_addr (representing a host-virtual address) could be aliased
to multiple guest physical addresses.  Since the KVM dirty page reporting
works on guest physical addresses, we need to clear all of the aliases
when a page is migrated, or there is a risk of losing writes to the
aliases that were not cleared.

Paolo

Paolo Bonzini (2):
  kvm: extract kvm_log_clear_one_slot
  kvm: clear dirty bitmaps from all overlapping memslots

 accel/kvm/kvm-all.c | 114 ++++++++++++++++++++++++++++++----------------------
 1 file changed, 66 insertions(+), 48 deletions(-)

-- 
1.8.3.1
On Fri, Sep 20, 2019 at 12:21:20PM +0200, Paolo Bonzini wrote:
> A single ram_addr (representing a host-virtual address) could be aliased
> to multiple guest physical addresses. Since the KVM dirty page reporting
> works on guest physical addresses, we need to clear all of the aliases
> when a page is migrated, or there is a risk of losing writes to the
> aliases that were not cleared.

(CCing Igor too so Igor would be aware of these changes that might
conflict with the recent memslot split work)

-- 
Peter Xu
On Fri, 20 Sep 2019 20:19:51 +0800
Peter Xu <peterx@redhat.com> wrote:

> On Fri, Sep 20, 2019 at 12:21:20PM +0200, Paolo Bonzini wrote:
> > A single ram_addr (representing a host-virtual address) could be aliased
> > to multiple guest physical addresses. Since the KVM dirty page reporting
> > works on guest physical addresses, we need to clear all of the aliases
> > when a page is migrated, or there is a risk of losing writes to the
> > aliases that were not cleared.
>
> (CCing Igor too so Igor would be aware of these changes that might
> conflict with the recent memslot split work)

Thanks Peter,
I'll rebase on top of this series and do some more testing
On Fri, Sep 20, 2019 at 03:58:51PM +0200, Igor Mammedov wrote:
> On Fri, 20 Sep 2019 20:19:51 +0800
> Peter Xu <peterx@redhat.com> wrote:
>
> > On Fri, Sep 20, 2019 at 12:21:20PM +0200, Paolo Bonzini wrote:
> > > A single ram_addr (representing a host-virtual address) could be aliased
> > > to multiple guest physical addresses. Since the KVM dirty page reporting
> > > works on guest physical addresses, we need to clear all of the aliases
> > > when a page is migrated, or there is a risk of losing writes to the
> > > aliases that were not cleared.
> >
> > (CCing Igor too so Igor would be aware of these changes that might
> > conflict with the recent memslot split work)
>
> Thanks Peter,
> I'll rebase on top of this series and do some more testing

Igor,

It turns out that this series is probably not required for the current
tree because memory_region_clear_dirty_bitmap() should have handled
the aliasing issue correctly, but then this patchset will be a
pre-requisite of your split series because when we split memory slots
it starts to be possible that log_clear() will be applied to multiple
kvm memslots.

Would you like to pick these two patches directly into your series?
The 1st paragraph in the 2nd patch could probably be inaccurate and
need amending (as mentioned).

Thanks,

-- 
Peter Xu
On Mon, 23 Sep 2019 09:29:46 +0800
Peter Xu <peterx@redhat.com> wrote:

> On Fri, Sep 20, 2019 at 03:58:51PM +0200, Igor Mammedov wrote:
> > On Fri, 20 Sep 2019 20:19:51 +0800
> > Peter Xu <peterx@redhat.com> wrote:
> >
> > > On Fri, Sep 20, 2019 at 12:21:20PM +0200, Paolo Bonzini wrote:
> > > > A single ram_addr (representing a host-virtual address) could be aliased
> > > > to multiple guest physical addresses. Since the KVM dirty page reporting
> > > > works on guest physical addresses, we need to clear all of the aliases
> > > > when a page is migrated, or there is a risk of losing writes to the
> > > > aliases that were not cleared.
> > >
> > > (CCing Igor too so Igor would be aware of these changes that might
> > > conflict with the recent memslot split work)
> >
> > Thanks Peter,
> > I'll rebase on top of this series and do some more testing
>
> Igor,
>
> It turns out that this series is probably not required for the current
> tree because memory_region_clear_dirty_bitmap() should have handled
> the aliasing issue correctly, but then this patchset will be a
> pre-requisite of your split series because when we split memory slots
> it starts to be possible that log_clear() will be applied to multiple
> kvm memslots.
>
> Would you like to pick these two patches directly into your series?
> The 1st paragraph in the 2nd patch could probably be inaccurate and
> need amending (as mentioned).

Yep, commit message doesn't fit patch, how about following description:
"
Currently MemoryRegionSection has 1:1 mapping to KVMSlot.
However next patch will allow splitting MemoryRegionSection into
several KVMSlot-s, make sure that kvm_physical_log_slot_clear()
is able to handle such 1:N mapping.
"

> Thanks,
On 23/09/19 18:15, Igor Mammedov wrote:
> Yep, commit message doesn't fit patch, how about following description:
> "
> Currently MemoryRegionSection has 1:1 mapping to KVMSlot.
> However next patch will allow splitting MemoryRegionSection into
> several KVMSlot-s, make sure that kvm_physical_log_slot_clear()
> is able to handle such 1:N mapping.
> "

Yes, that's great.

Paolo
On Mon, Sep 23, 2019 at 06:49:12PM +0200, Paolo Bonzini wrote:
> On 23/09/19 18:15, Igor Mammedov wrote:
> > Yep, commit message doesn't fit patch, how about following description:
> > "
> > Currently MemoryRegionSection has 1:1 mapping to KVMSlot.
> > However next patch will allow splitting MemoryRegionSection into
> > several KVMSlot-s, make sure that kvm_physical_log_slot_clear()
> > is able to handle such 1:N mapping.
> > "
>
> Yes, that's great.

Please feel free to add my r-b directly on patch 2 with that amended.

Thanks,

-- 
Peter Xu