From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Memblock pages (including reserved memory) should have their allocation
tags initialized to CODETAG_EMPTY via clear_page_tag_ref() before being
released to the page allocator. When KHO restores pages through
kho_restore_page(), this call is missing, which causes mismatched
allocation/deallocation tracking and triggers the warning below:
alloc_tag was not set
WARNING: include/linux/alloc_tag.h:164 at ___free_pages+0xb8/0x260, CPU#1: swapper/0/1
RIP: 0010:___free_pages+0xb8/0x260
kho_restore_vmalloc+0x187/0x2e0
kho_test_init+0x3c4/0xa30
do_one_initcall+0x62/0x2b0
kernel_init_freeable+0x25b/0x480
kernel_init+0x1a/0x1c0
ret_from_fork+0x2d1/0x360
Add the missing clear_page_tag_ref() call in kho_restore_page() to fix
this.
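
For comparison, the reserved-page release helpers clear the tag ref in
the same way before a page reaches the allocator's free path. A minimal
sketch of that pattern (hypothetical helper name, modeled loosely on
free_reserved_page() in include/linux/mm.h and paraphrased rather than
copied verbatim):

	static inline void release_reserved_page_sketch(struct page *page)
	{
		/* Mark the tag ref empty so the free path sees no mismatch. */
		clear_page_tag_ref(page);
		ClearPageReserved(page);
		init_page_count(page);
		__free_page(page);
		adjust_managed_page_count(page, 1);
	}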
Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
Cc: <stable@vger.kernel.org>
Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
---
v2 -> v3:
- also call clear_page_tag_ref() for non-compound order-0 tail pages
kernel/liveupdate/kexec_handover.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
index d4482b6e3cae..96767b106cac 100644
--- a/kernel/liveupdate/kexec_handover.c
+++ b/kernel/liveupdate/kexec_handover.c
@@ -255,6 +255,14 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
if (is_folio && info.order)
prep_compound_page(page, info.order);
+ /* Always mark headpage's codetag as empty to avoid accounting mismatch */
+ clear_page_tag_ref(page);
+ if (!is_folio) {
+ /* Also do that for the non-compound tail pages */
+ for (unsigned int i = 1; i < nr_pages; i++)
+ clear_page_tag_ref(page + i);
+ }
+
adjust_managed_page_count(page, nr_pages);
return page;
}
--
2.25.1
On Thu, Jan 22, 2026 at 01:27:40PM +0000, ranxiaokai627@163.com wrote:
> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>
> Memblock pages (including reserved memory) should have their allocation
> tags initialized to CODETAG_EMPTY via clear_page_tag_ref() before being
> released to the page allocator. When KHO restores pages through
> kho_restore_page(), this call is missing, which causes mismatched
> allocation/deallocation tracking and triggers the warning below:
>
> alloc_tag was not set
> WARNING: include/linux/alloc_tag.h:164 at ___free_pages+0xb8/0x260, CPU#1: swapper/0/1
> RIP: 0010:___free_pages+0xb8/0x260
> kho_restore_vmalloc+0x187/0x2e0
> kho_test_init+0x3c4/0xa30
> do_one_initcall+0x62/0x2b0
> kernel_init_freeable+0x25b/0x480
> kernel_init+0x1a/0x1c0
> ret_from_fork+0x2d1/0x360
>
> Add the missing clear_page_tag_ref() call in kho_restore_page() to fix
> this.
>
> Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
Reviewed-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
> ---
>
> v2 -> v3:
> - also call clear_page_tag_ref() for non-compound order-0 tail pages
>
> kernel/liveupdate/kexec_handover.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index d4482b6e3cae..96767b106cac 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -255,6 +255,14 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
> if (is_folio && info.order)
> prep_compound_page(page, info.order);
>
> + /* Always mark headpage's codetag as empty to avoid accounting mismatch */
> + clear_page_tag_ref(page);
> + if (!is_folio) {
> + /* Also do that for the non-compound tail pages */
> + for (unsigned int i = 1; i < nr_pages; i++)
> + clear_page_tag_ref(page + i);
> + }
> +
> adjust_managed_page_count(page, nr_pages);
> return page;
> }
> --
> 2.25.1
>
>
--
Sincerely yours,
Mike.
On Thu, Jan 22, 2026 at 8:28 AM <ranxiaokai627@163.com> wrote:
>
> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>
> Memblock pages (including reserved memory) should have their allocation
> tags initialized to CODETAG_EMPTY via clear_page_tag_ref() before being
> released to the page allocator. When KHO restores pages through
> kho_restore_page(), this call is missing, which causes mismatched
> allocation/deallocation tracking and triggers the warning below:
>
> alloc_tag was not set
> WARNING: include/linux/alloc_tag.h:164 at ___free_pages+0xb8/0x260, CPU#1: swapper/0/1
> RIP: 0010:___free_pages+0xb8/0x260
> kho_restore_vmalloc+0x187/0x2e0
> kho_test_init+0x3c4/0xa30
> do_one_initcall+0x62/0x2b0
> kernel_init_freeable+0x25b/0x480
> kernel_init+0x1a/0x1c0
> ret_from_fork+0x2d1/0x360
>
> Add the missing clear_page_tag_ref() call in kho_restore_page() to fix
> this.
>
> Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> ---
>
> v2 -> v3:
> - also call clear_page_tag_ref() for non-compound order-0 tail pages
>
> kernel/liveupdate/kexec_handover.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index d4482b6e3cae..96767b106cac 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -255,6 +255,14 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
> if (is_folio && info.order)
> prep_compound_page(page, info.order);
>
> + /* Always mark headpage's codetag as empty to avoid accounting mismatch */
> + clear_page_tag_ref(page);
> + if (!is_folio) {
> + /* Also do that for the non-compound tail pages */
> + for (unsigned int i = 1; i < nr_pages; i++)
> + clear_page_tag_ref(page + i);
> + }
> +
Reviewed-by: Pasha Tatashin <pasha.tatashin@soleen.com>
Pasha
> adjust_managed_page_count(page, nr_pages);
> return page;
> }
> --
> 2.25.1
>
>
>
On Thu, Jan 22 2026, ranxiaokai627@163.com wrote:
> From: Ran Xiaokai <ran.xiaokai@zte.com.cn>
>
> Memblock pages (including reserved memory) should have their allocation
> tags initialized to CODETAG_EMPTY via clear_page_tag_ref() before being
> released to the page allocator. When KHO restores pages through
> kho_restore_page(), this call is missing, which causes mismatched
> allocation/deallocation tracking and triggers the warning below:
>
> alloc_tag was not set
> WARNING: include/linux/alloc_tag.h:164 at ___free_pages+0xb8/0x260, CPU#1: swapper/0/1
> RIP: 0010:___free_pages+0xb8/0x260
> kho_restore_vmalloc+0x187/0x2e0
> kho_test_init+0x3c4/0xa30
> do_one_initcall+0x62/0x2b0
> kernel_init_freeable+0x25b/0x480
> kernel_init+0x1a/0x1c0
> ret_from_fork+0x2d1/0x360
>
> Add the missing clear_page_tag_ref() call in kho_restore_page() to fix
> this.
>
> Fixes: fc33e4b44b27 ("kexec: enable KHO support for memory preservation")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Ran Xiaokai <ran.xiaokai@zte.com.cn>
> ---
>
> v2 -> v3:
> - also call clear_page_tag_ref() for non-compound order-0 tail pages
>
> kernel/liveupdate/kexec_handover.c | 8 ++++++++
> 1 file changed, 8 insertions(+)
>
> diff --git a/kernel/liveupdate/kexec_handover.c b/kernel/liveupdate/kexec_handover.c
> index d4482b6e3cae..96767b106cac 100644
> --- a/kernel/liveupdate/kexec_handover.c
> +++ b/kernel/liveupdate/kexec_handover.c
> @@ -255,6 +255,14 @@ static struct page *kho_restore_page(phys_addr_t phys, bool is_folio)
> if (is_folio && info.order)
> prep_compound_page(page, info.order);
>
> + /* Always mark headpage's codetag as empty to avoid accounting mismatch */
> + clear_page_tag_ref(page);
> + if (!is_folio) {
> + /* Also do that for the non-compound tail pages */
> + for (unsigned int i = 1; i < nr_pages; i++)
> + clear_page_tag_ref(page + i);
> + }
> +
I think it would be a little bit better if we just did this in the loop
above instead of looping again. But I think it is fine for now. I can
fix it up in the next release.
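
Something along these lines, assuming the refcount setup just above this
hunk still walks the tail pages in a loop (untested sketch, names as in
the hunk):

	/* Head page always carries the tag ref; mark it empty. */
	set_page_count(page, 1);
	clear_page_tag_ref(page);

	for (unsigned int i = 1; i < nr_pages; i++) {
		set_page_count(page + i, 0);
		/* Non-folio restores free each order-0 page on its own. */
		if (!is_folio)
			clear_page_tag_ref(page + i);
	}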
Reviewed-by: Pratyush Yadav <pratyush@kernel.org>
> adjust_managed_page_count(page, nr_pages);
> return page;
> }
--
Regards,
Pratyush Yadav