[PATCH] migration/multifd: Fix clearing of mapped-ram zero pages

Fabiano Rosas posted 1 patch 1 month, 1 week ago
git fetch https://github.com/patchew-project/qemu tags/patchew/20240321201242.6009-1-farosas@suse.de
Maintainers: Peter Xu <peterx@redhat.com>, Fabiano Rosas <farosas@suse.de>
When the zero page detection is done in the multifd threads, we need
to iterate the second part of the pages->offset array and clear the
file bitmap for each zero page. The piece of code we merged to do that
is wrong.

The reason this has passed all the tests is that the bitmap is
already initialized with zeroes, so clearing the bits only has an
effect during live migration, when a data page goes from having data
to no data.

Fixes: 303e6f54f9 ("migration/multifd: Implement zero page transmission on the multifd thread.")
Signed-off-by: Fabiano Rosas <farosas@suse.de>
---
CI run: https://gitlab.com/farosas/qemu/-/pipelines/1222882269
---
 migration/multifd.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/migration/multifd.c b/migration/multifd.c
index d2f0238f70..2802afe79d 100644
--- a/migration/multifd.c
+++ b/migration/multifd.c
@@ -111,7 +111,6 @@ void multifd_send_channel_created(void)
 static void multifd_set_file_bitmap(MultiFDSendParams *p)
 {
     MultiFDPages_t *pages = p->pages;
-    uint32_t zero_num = p->pages->num - p->pages->normal_num;
 
     assert(pages->block);
 
@@ -119,7 +118,7 @@ static void multifd_set_file_bitmap(MultiFDSendParams *p)
         ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], true);
     }
 
-    for (int i = p->pages->num; i < zero_num; i++) {
+    for (int i = p->pages->normal_num; i < p->pages->num; i++) {
         ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], false);
     }
 }
-- 
2.35.3
Re: [PATCH] migration/multifd: Fix clearing of mapped-ram zero pages
Posted by Peter Xu 1 month, 1 week ago
On Thu, Mar 21, 2024 at 05:12:42PM -0300, Fabiano Rosas wrote:
> When the zero page detection is done in the multifd threads, we need
> to iterate the second part of the pages->offset array and clear the
> file bitmap for each zero page. The piece of code we merged to do that
> is wrong.
> 
> The reason this has passed all the tests is that the bitmap is
> already initialized with zeroes, so clearing the bits only has an
> effect during live migration, when a data page goes from having data
> to no data.
> 
> Fixes: 303e6f54f9 ("migration/multifd: Implement zero page transmission on the multifd thread.")
> Signed-off-by: Fabiano Rosas <farosas@suse.de>
> ---
> CI run: https://gitlab.com/farosas/qemu/-/pipelines/1222882269
> ---
>  migration/multifd.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/migration/multifd.c b/migration/multifd.c
> index d2f0238f70..2802afe79d 100644
> --- a/migration/multifd.c
> +++ b/migration/multifd.c
> @@ -111,7 +111,6 @@ void multifd_send_channel_created(void)
>  static void multifd_set_file_bitmap(MultiFDSendParams *p)
>  {
>      MultiFDPages_t *pages = p->pages;
> -    uint32_t zero_num = p->pages->num - p->pages->normal_num;
>  
>      assert(pages->block);
>  
> @@ -119,7 +118,7 @@ static void multifd_set_file_bitmap(MultiFDSendParams *p)
>          ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], true);
>      }
>  
> -    for (int i = p->pages->num; i < zero_num; i++) {
> +    for (int i = p->pages->normal_num; i < p->pages->num; i++) {
>          ramblock_set_file_bmap_atomic(pages->block, pages->offset[i], false);
>      }
>  }

Hmm, a challenging one even if it reads obvious.. :)

queued for 9.0-rc1, thanks.

-- 
Peter Xu