Let's factor out prefaulting/populating to make further changes easier to
review. While at it, use the actual page size of the ramblock, which
defaults to qemu_real_host_page_size for anonymous memory.
Signed-off-by: David Hildenbrand <david@redhat.com>
---
migration/ram.c | 21 ++++++++++++---------
1 file changed, 12 insertions(+), 9 deletions(-)
diff --git a/migration/ram.c b/migration/ram.c
index e1c158dc92..de47650c90 100644
--- a/migration/ram.c
+++ b/migration/ram.c
@@ -1639,6 +1639,17 @@ out:
return ret;
}
+static inline void populate_range(RAMBlock *block, ram_addr_t offset,
+ ram_addr_t size)
+{
+ for (; offset < size; offset += block->page_size) {
+ char tmp = *((char *)block->host + offset);
+
+ /* Don't optimize the read out */
+ asm volatile("" : "+r" (tmp));
+ }
+}
+
/*
* ram_block_populate_pages: populate memory in the RAM block by reading
* an integer from the beginning of each page.
@@ -1650,15 +1661,7 @@ out:
*/
static void ram_block_populate_pages(RAMBlock *block)
{
- char *ptr = (char *) block->host;
-
- for (ram_addr_t offset = 0; offset < block->used_length;
- offset += qemu_real_host_page_size) {
- char tmp = *(ptr + offset);
-
- /* Don't optimize the read out */
- asm volatile("" : "+r" (tmp));
- }
+ populate_range(block, 0, block->used_length);
}
/*
--
2.31.1
On Thu, Sep 02, 2021 at 03:14:31PM +0200, David Hildenbrand wrote:
> Let's factor out prefaulting/populating to make further changes easier to
> review. While at it, use the actual page size of the ramblock, which
> defaults to qemu_real_host_page_size for anonymous memory.
>
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
> migration/ram.c | 21 ++++++++++++---------
> 1 file changed, 12 insertions(+), 9 deletions(-)
>
> diff --git a/migration/ram.c b/migration/ram.c
> index e1c158dc92..de47650c90 100644
> --- a/migration/ram.c
> +++ b/migration/ram.c
> @@ -1639,6 +1639,17 @@ out:
> return ret;
> }
>
> +static inline void populate_range(RAMBlock *block, ram_addr_t offset,
> + ram_addr_t size)
> +{
> + for (; offset < size; offset += block->page_size) {
> + char tmp = *((char *)block->host + offset);
> +
> + /* Don't optimize the read out */
> + asm volatile("" : "+r" (tmp));
> + }
> +}
If to make it a common function, make it populate_range_read()?
Just to identify from RW, as we'll fill the holes with zero pages only, not
doing page allocations yet, so not a complete "populate".
That'll be good enough for live snapshot as uffd-wp works for zero pages,
however I'm just afraid it may stop working for some new users of it when zero
pages won't suffice.
Maybe some comment would help too?
--
Peter Xu
On 03.09.21 00:28, Peter Xu wrote:
> On Thu, Sep 02, 2021 at 03:14:31PM +0200, David Hildenbrand wrote:
>> Let's factor out prefaulting/populating to make further changes easier to
>> review. While at it, use the actual page size of the ramblock, which
>> defaults to qemu_real_host_page_size for anonymous memory.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>> migration/ram.c | 21 ++++++++++++---------
>> 1 file changed, 12 insertions(+), 9 deletions(-)
>>
>> diff --git a/migration/ram.c b/migration/ram.c
>> index e1c158dc92..de47650c90 100644
>> --- a/migration/ram.c
>> +++ b/migration/ram.c
>> @@ -1639,6 +1639,17 @@ out:
>> return ret;
>> }
>>
>> +static inline void populate_range(RAMBlock *block, ram_addr_t offset,
>> + ram_addr_t size)
>> +{
>> + for (; offset < size; offset += block->page_size) {
>> + char tmp = *((char *)block->host + offset);
>> +
>> + /* Don't optimize the read out */
>> + asm volatile("" : "+r" (tmp));
>> + }
>> +}
>
> If to make it a common function, make it populate_range_read()?
Indeed, makes sense.
>
> Just to identify from RW, as we'll fill the holes with zero pages only, not
> doing page allocations yet, so not a complete "populate".
Well, depending on the actual memory backend ...
>
> That'll be good enough for live snapshot as uffd-wp works for zero pages,
> however I'm just afraid it may stop working for some new users of it when zero
> pages won't suffice.
I thought about that as well. But snapshots/migration will read all
memory either way and consume real memory when there is no shared zero
page. So it's just shifting the point in time when we allocate all these
pages I guess.
>
> Maybe some comment would help too?
>
Yes, will do, thanks!
--
Thanks,
David / dhildenb
>> That'll be good enough for live snapshot as uffd-wp works for zero pages, >> however I'm just afraid it may stop working for some new users of it when zero >> pages won't suffice. > > I thought about that as well. But snapshots/migration will read all > memory either way and consume real memory when there is no shared zero > page. So it's just shifting the point in time when we allocate all these > pages I guess. ... thinking again, even when populating on shmem and friends there is nothing stopping pages from getting mapped out again. What would happen when trying uffd-wp protection on a pte_none() in your current shmem implementation? Will it lookup if there is something in the page cache (not a hole) and set a PTE marker? Or will it simply skip as there is currently nothing in the page table? Or will it simply unconditionally install a PTE marker, even if there is a hole? Having an uffd-wp mode that doesn't require pre-population would really be great. I remember you shared prototypes. -- Thanks, David / dhildenb
On Fri, Sep 03, 2021 at 09:58:06AM +0200, David Hildenbrand wrote: > > > That'll be good enough for live snapshot as uffd-wp works for zero pages, > > > however I'm just afraid it may stop working for some new users of it when zero > > > pages won't suffice. > > > > I thought about that as well. But snapshots/migration will read all > > memory either way and consume real memory when there is no shared zero > > page. So it's just shifting the point in time when we allocate all these > > pages I guess. > > ... thinking again, even when populating on shmem and friends there is > nothing stopping pages from getting mapped out again. > > What would happen when trying uffd-wp protection on a pte_none() in your > current shmem implementation? Will it lookup if there is something in the > page cache (not a hole) and set a PTE marker? Or will it simply skip as > there is currently nothing in the page table? Or will it simply > unconditionally install a PTE marker, even if there is a hole? It (will - I haven't rebased and posted) sets a pte marker. So uffd-wp will always work on read prefault irrelevant of memory type in the future. > > Having an uffd-wp mode that doesn't require pre-population would really be > great. I remember you shared prototypes. Yes, I planned to do that after the shmem bits, because they have some conflict. I don't want to mess up more with the current series either, which is already hard to push, which is very unfortunate. -- Peter Xu
On 03.09.21 21:20, Peter Xu wrote: > On Fri, Sep 03, 2021 at 09:58:06AM +0200, David Hildenbrand wrote: >>>> That'll be good enough for live snapshot as uffd-wp works for zero pages, >>>> however I'm just afraid it may stop working for some new users of it when zero >>>> pages won't suffice. >>> >>> I thought about that as well. But snapshots/migration will read all >>> memory either way and consume real memory when there is no shared zero >>> page. So it's just shifting the point in time when we allocate all these >>> pages I guess. >> >> ... thinking again, even when populating on shmem and friends there is >> nothing stopping pages from getting mapped out again. >> >> What would happen when trying uffd-wp protection on a pte_none() in your >> current shmem implementation? Will it lookup if there is something in the >> page cache (not a hole) and set a PTE marker? Or will it simply skip as >> there is currently nothing in the page table? Or will it simply >> unconditionally install a PTE marker, even if there is a hole? > > It (will - I haven't rebased and posted) sets a pte marker. So uffd-wp will > always work on read prefault irrelevant of memory type in the future. > >> >> Having an uffd-wp mode that doesn't require pre-population would really be >> great. I remember you shared prototypes. > > Yes, I planned to do that after the shmem bits, because they have some > conflict. I don't want to mess up more with the current series either, which is > already hard to push, which is very unfortunate. > Yeah ... alternatively, we could simply populate the shared zeropage on private anonymous memory when trying protecting a pte_none(). That might actually be a very elegant solution. -- Thanks, David / dhildenb
On 03.09.21 21:40, David Hildenbrand wrote: > On 03.09.21 21:20, Peter Xu wrote: >> On Fri, Sep 03, 2021 at 09:58:06AM +0200, David Hildenbrand wrote: >>>>> That'll be good enough for live snapshot as uffd-wp works for zero pages, >>>>> however I'm just afraid it may stop working for some new users of it when zero >>>>> pages won't suffice. >>>> >>>> I thought about that as well. But snapshots/migration will read all >>>> memory either way and consume real memory when there is no shared zero >>>> page. So it's just shifting the point in time when we allocate all these >>>> pages I guess. >>> >>> ... thinking again, even when populating on shmem and friends there is >>> nothing stopping pages from getting mapped out again. >>> >>> What would happen when trying uffd-wp protection on a pte_none() in your >>> current shmem implementation? Will it lookup if there is something in the >>> page cache (not a hole) and set a PTE marker? Or will it simply skip as >>> there is currently nothing in the page table? Or will it simply >>> unconditionally install a PTE marker, even if there is a hole? >> >> It (will - I haven't rebased and posted) sets a pte marker. So uffd-wp will >> always work on read prefault irrelevant of memory type in the future. >> >>> >>> Having an uffd-wp mode that doesn't require pre-population would really be >>> great. I remember you shared prototypes. >> >> Yes, I planned to do that after the shmem bits, because they have some >> conflict. I don't want to mess up more with the current series either, which is >> already hard to push, which is very unfortunate. >> > > Yeah ... alternatively, we could simply populate the shared zeropage on > private anonymous memory when trying protecting a pte_none(). That might > actually be a very elegant solution. > Oh well, it's late in Germany ... doing it properly avoids having to modify/allocate page tables completely. So that is certainly the better approach. -- Thanks, David / dhildenb
© 2016 - 2026 Red Hat, Inc.