The kernel's memory management subsystem provides a dedicated interface,
pagetable_free(), for freeing page table pages. Update two call sites to
use pagetable_free() instead of the lower-level __free_page() or
__free_pages(). This improves code consistency and clarity, and ensures
the correct freeing mechanism is used.

Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Acked-by: David Hildenbrand <david@redhat.com>
Acked-by: Mike Rapoport (Microsoft) <rppt@kernel.org>
---
arch/x86/mm/init_64.c | 2 +-
arch/x86/mm/pat/set_memory.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0e4270e20fad..3d9a5e4ccaa4 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1031,7 +1031,7 @@ static void __meminit free_pagetable(struct page *page, int order)
 		free_reserved_pages(page, nr_pages);
 #endif
 	} else {
-		__free_pages(page, order);
+		pagetable_free(page_ptdesc(page));
 	}
 }

diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 970981893c9b..fffb6ef1997d 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -429,7 +429,7 @@ static void cpa_collapse_large_pages(struct cpa_data *cpa)
 
 	list_for_each_entry_safe(ptdesc, tmp, &pgtables, pt_list) {
 		list_del(&ptdesc->pt_list);
-		__free_page(ptdesc_page(ptdesc));
+		pagetable_free(ptdesc);
 	}
 }
--
2.43.0

On Wed, Oct 22, 2025 at 04:26:32PM +0800, Lu Baolu wrote:
> The kernel's memory management subsystem provides a dedicated interface,
> pagetable_free(), for freeing page table pages. Update two call sites to
> use pagetable_free() instead of the lower-level __free_page() or
> __free_pages(). This improves code consistency and clarity, and ensures
> the correct freeing mechanism is used.
In doing these ptdesc calls here, we're running into issues with the
concurrent work around ptdescs: allocating frozen page tables [1] and
separately allocating ptdescs [2].

What we're seeing are attempts to cast a page that was allocated by the
regular page allocator to a ptdesc - which won't work anymore.

My hunch is that we want a lot of the code in pat/set_memory.c to be
using ptdescs, a.k.a. page table descriptors - at least all the
allocations and frees for now. Does that seem right? I'm not really
familiar with this code, though...
[1] https://lore.kernel.org/linux-mm/202511172257.ffd96dab-lkp@intel.com/T/#mf68f9c13f4b188eac08ae261c0172afe81a75827
[2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/T/#md72f66473e017d6f3ce277405ad115e71898f418
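For concreteness, here is a rough, untested sketch of the direction I
mean - my own illustration, not code from either series, and the helper
names alloc_pte_table()/free_pte_table() are made up:

	#include <linux/mm.h>

	/* Allocate the page table page as a ptdesc from the start. */
	static pte_t *alloc_pte_table(gfp_t gfp)
	{
		struct ptdesc *ptdesc = pagetable_alloc(gfp, 0);

		if (!ptdesc)
			return NULL;
		return (pte_t *)ptdesc_address(ptdesc);
	}

	/* The matching free never casts a plain page to a ptdesc. */
	static void free_pte_table(pte_t *pte)
	{
		pagetable_free(virt_to_ptdesc(pte));
	}

With that pairing, page_ptdesc()/virt_to_ptdesc() only ever see memory
that actually came from pagetable_alloc().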
On Mon, Nov 17, 2025 at 06:14:22PM -0800, Vishal Moola (Oracle) wrote:
> On Wed, Oct 22, 2025 at 04:26:32PM +0800, Lu Baolu wrote:
> > The kernel's memory management subsystem provides a dedicated interface,
> > pagetable_free(), for freeing page table pages. Update two call sites to
> > use pagetable_free() instead of the lower-level __free_page() or
> > __free_pages(). This improves code consistency and clarity, and ensures
> > the correct freeing mechanism is used.
>
> In doing these ptdesc calls here, we're running into issues with the
> concurrent work around ptdescs: allocating frozen page tables [1] and
> separately allocating ptdescs [2].
>
> What we're seeing are attempts to cast a page that was allocated by the
> regular page allocator to a ptdesc - which won't work anymore.
>
> My hunch is that we want a lot of the code in pat/set_memory.c to be
> using ptdescs, a.k.a. page table descriptors - at least all the
> allocations and frees for now. Does that seem right? I'm not really
> familiar with this code, though...
Yeah, that sounds about right. Allocations in x86::set_memory should use
pXd_alloc_one_kernel().
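Something like the following - just an untested sketch from me, assuming
the PTE level and the kernel page tables (&init_mm):

	#include <linux/mm.h>
	#include <asm/pgalloc.h>

	/* Allocate a kernel PTE page through the dedicated interface... */
	pte_t *pte = pte_alloc_one_kernel(&init_mm);

	if (!pte)
		return -ENOMEM;

	/* ...and free it with the matching helper, not __free_page(). */
	pte_free_kernel(&init_mm, pte);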
> [1] https://lore.kernel.org/linux-mm/202511172257.ffd96dab-lkp@intel.com/T/#mf68f9c13f4b188eac08ae261c0172afe81a75827
> [2] https://lore.kernel.org/linux-mm/20251020001652.2116669-1-willy@infradead.org/T/#md72f66473e017d6f3ce277405ad115e71898f418
--
Sincerely yours,
Mike.