For khugepaged to support different mTHP orders, we must generalize the
hugepage_vma_revalidate() check so it verifies both that the PMD range is
not shared by another VMA and that the requested orders are enabled.

To ensure madvise_collapse can operate on mTHP orders without the PMD
order enabled, convert hugepage_vma_revalidate() to take a bitmap of
orders.
No functional change in this patch.
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Acked-by: David Hildenbrand <david@redhat.com>
Co-developed-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Dev Jain <dev.jain@arm.com>
Signed-off-by: Nico Pache <npache@redhat.com>
---
mm/khugepaged.c | 13 ++++++++-----
1 file changed, 8 insertions(+), 5 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b7b98aebb670..2d192ec961d2 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -917,7 +917,7 @@ static int collapse_find_target_node(struct collapse_control *cc)
static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
bool expect_anon,
struct vm_area_struct **vmap,
- struct collapse_control *cc)
+ struct collapse_control *cc, unsigned long orders)
{
struct vm_area_struct *vma;
enum tva_type type = cc->is_khugepaged ? TVA_KHUGEPAGED :
@@ -930,9 +930,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
if (!vma)
return SCAN_VMA_NULL;
+ /* Always check the PMD order to insure its not shared by another VMA */
if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
return SCAN_ADDRESS_RANGE;
- if (!thp_vma_allowable_order(vma, vma->vm_flags, type, PMD_ORDER))
+ if (!thp_vma_allowable_orders(vma, vma->vm_flags, type, orders))
return SCAN_VMA_CHECK;
/*
* Anon VMA expected, the address may be unmapped then
@@ -1134,7 +1135,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
goto out_nolock;
mmap_read_lock(mm);
- result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
+ result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
+ BIT(HPAGE_PMD_ORDER));
if (result != SCAN_SUCCEED) {
mmap_read_unlock(mm);
goto out_nolock;
@@ -1168,7 +1170,8 @@ static int collapse_huge_page(struct mm_struct *mm, unsigned long address,
* mmap_lock.
*/
mmap_write_lock(mm);
- result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
+ result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
+ BIT(HPAGE_PMD_ORDER));
if (result != SCAN_SUCCEED)
goto out_up_write;
/* check if the pmd is still valid */
@@ -2807,7 +2810,7 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
mmap_read_lock(mm);
mmap_locked = true;
result = hugepage_vma_revalidate(mm, addr, false, &vma,
- cc);
+ cc, BIT(HPAGE_PMD_ORDER));
if (result != SCAN_SUCCEED) {
last_fail = result;
goto out_nolock;
--
2.50.1
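For readers following the conversion: the new `orders` argument is a bitmap
in which bit n means "order n is requested", so BIT(HPAGE_PMD_ORDER)
preserves the old single-order behaviour. Below is a minimal userspace
sketch of that convention; the helper name and the order value 9 are
illustrative assumptions, not the kernel's actual API.

  #include <stdio.h>

  #define BIT(n)          (1UL << (n))
  #define HPAGE_PMD_ORDER 9  /* assumption: 4K base pages, 2M PMD huge page */

  /* Hypothetical helper: pick the largest requested order from the bitmap. */
  static int highest_enabled_order(unsigned long orders)
  {
  	int order;

  	/* Scan from the largest possible order downward. */
  	for (order = 8 * (int)sizeof(orders) - 1; order >= 0; order--)
  		if (orders & BIT(order))
  			return order;
  	return -1;
  }

  int main(void)
  {
  	/* PMD-only collapse, as the converted callers in this patch pass: */
  	unsigned long pmd_only = BIT(HPAGE_PMD_ORDER);
  	/* A later mTHP-capable caller could combine several orders: */
  	unsigned long mthp = BIT(4) | BIT(5) | BIT(HPAGE_PMD_ORDER);

  	printf("%d\n", highest_enabled_order(pmd_only)); /* prints 9 */
  	printf("%d\n", highest_enabled_order(mthp));     /* prints 9 */
  	return 0;
  }

This is why the patch is a no-op today: every caller passes exactly one bit,
the PMD order, and thp_vma_allowable_orders() with a single-bit bitmap
behaves like the old thp_vma_allowable_order() call.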
Hi, Nico

Some nit below.

On Tue, Aug 19, 2025 at 07:41:55AM -0600, Nico Pache wrote:
>[...]
>
>diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>index b7b98aebb670..2d192ec961d2 100644
>--- a/mm/khugepaged.c
>+++ b/mm/khugepaged.c

There is a comment above this function, which says "revalidate vma before
taking mmap_lock".

I am afraid it is "after taking mmap_lock"? or "after taking mmap_lock again"?

>@@ -930,9 +930,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> if (!vma)
> return SCAN_VMA_NULL;
>
>+ /* Always check the PMD order to insure its not shared by another VMA */
> if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
> return SCAN_ADDRESS_RANGE;
>- if (!thp_vma_allowable_order(vma, vma->vm_flags, type, PMD_ORDER))
>+ if (!thp_vma_allowable_orders(vma, vma->vm_flags, type, orders))
> return SCAN_VMA_CHECK;
> /*
> * Anon VMA expected, the address may be unmapped then

Below is a comment, "thp_vma_allowable_order may return".

Since you use thp_vma_allowable_orders, maybe we need to change the comment
too.

>[...]
>--
>2.50.1

--
Wei Yang
Help you, Help me
On Sat, Aug 23, 2025 at 7:37 PM Wei Yang <richard.weiyang@gmail.com> wrote:
>
> Hi, Nico
>
> Some nit below.
>
> [...]
>
> There is a comment above this function, which says "revalidate vma before
> taking mmap_lock".
>
> I am afraid it is "after taking mmap_lock"? or "after taking mmap_lock again"?

Good catch, never noticed that. I updated the comment!

> [...]
>
> Below is a comment, "thp_vma_allowable_order may return".
>
> Since you use thp_vma_allowable_orders, maybe we need to change the comment
> too.

Ack! Thanks for the review!

> [...]
>
> --
> Wei Yang
> Help you, Help me
On Tue, Aug 19, 2025 at 07:41:55AM -0600, Nico Pache wrote:
> [...]
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> Signed-off-by: Nico Pache <npache@redhat.com>

LGTM (modulo nit/query below) so:

Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

> @@ -930,9 +930,10 @@ static int hugepage_vma_revalidate(struct mm_struct *mm, unsigned long address,
> if (!vma)
> return SCAN_VMA_NULL;
>
> + /* Always check the PMD order to insure its not shared by another VMA */

NIT: ensure not insure.

> if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
> return SCAN_ADDRESS_RANGE;
> - if (!thp_vma_allowable_order(vma, vma->vm_flags, type, PMD_ORDER))
> + if (!thp_vma_allowable_orders(vma, vma->vm_flags, type, orders))
> return SCAN_VMA_CHECK;
>
> [...]
>
> mmap_read_lock(mm);
> - result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> + result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
> + BIT(HPAGE_PMD_ORDER));

Shouldn't this be PMD order? Seems equivalent.

> [...]
> --
> 2.50.1
On Wed, Aug 20, 2025 at 7:25 AM Lorenzo Stoakes <lorenzo.stoakes@oracle.com> wrote:
>
> [...]
>
> LGTM (modulo nit/query below) so:
>
> Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>

Thanks :)

> > + /* Always check the PMD order to insure its not shared by another VMA */
>
> NIT: ensure not insure.

Ack, I'll fix that!

> > - result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
> > + result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
> > + BIT(HPAGE_PMD_ORDER));
>
> Shouldn't this be PMD order? Seems equivalent.

Yeah, I'm actually not sure why we have both... they seem to be the
same thing, but perhaps there is some reason for having two...

> [...]
On Wed, Aug 20, 2025 at 09:40:40AM -0600, Nico Pache wrote:
[...]
>>> if (!thp_vma_suitable_order(vma, address, PMD_ORDER))
>>> return SCAN_ADDRESS_RANGE;
>>> [...]
>>> - result = hugepage_vma_revalidate(mm, address, true, &vma, cc);
>>> + result = hugepage_vma_revalidate(mm, address, true, &vma, cc,
>>> + BIT(HPAGE_PMD_ORDER));
>>
>> Shouldn't this be PMD order? Seems equivalent.
>
> Yeah, I'm actually not sure why we have both... they seem to be the
> same thing, but perhaps there is some reason for having two...

I am confused by these two: PMD_ORDER above and HPAGE_PMD_ORDER here.

Do we have a guide on when to use which?

--
Wei Yang
Help you, Help me
On 20 Aug 2025, at 23:41, Wei Yang wrote:

> On Wed, Aug 20, 2025 at 09:40:40AM -0600, Nico Pache wrote:
> [...]
>>> Shouldn't this be PMD order? Seems equivalent.
>> Yeah, I'm actually not sure why we have both... they seem to be the
>> same thing, but perhaps there is some reason for having two...
>
> I am confused by these two: PMD_ORDER above and HPAGE_PMD_ORDER here.
>
> Do we have a guide on when to use which?

Looking at the definition of HPAGE_PMD_SHIFT in huge_mm.h, it will cause a
build bug when PMD-level huge pages are not supported. So I think
HPAGE_PMD_ORDER should be used for all huge pages (both THP and hugetlb).

--
Best Regards,
Yan, Zi
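(For reference, the definitions Zi Yan points at look roughly like the
sketch below; this is a paraphrase of include/linux/huge_mm.h, and the
exact form varies by kernel version.)

  /* Paraphrased sketch, not a verbatim copy of huge_mm.h. */
  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
  #define HPAGE_PMD_SHIFT		PMD_SHIFT
  #else
  /* Any use of HPAGE_PMD_* without THP support fails at build time. */
  #define HPAGE_PMD_SHIFT		({ BUILD_BUG(); 0; })
  #endif

  #define HPAGE_PMD_ORDER		(HPAGE_PMD_SHIFT - PAGE_SHIFT)
  #define HPAGE_PMD_NR		(1 << HPAGE_PMD_ORDER)

So inside khugepaged, where THP support is a given, PMD_ORDER and
HPAGE_PMD_ORDER should evaluate to the same number; the HPAGE_ variant just
carries the build-time guard.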
On Thu, Aug 21, 2025 at 10:09:10AM -0400, Zi Yan wrote:
> On 20 Aug 2025, at 23:41, Wei Yang wrote:
> [...]
> > I am confused by these two: PMD_ORDER above and HPAGE_PMD_ORDER here.
> >
> > Do we have a guide on when to use which?
>
> Looking at the definition of HPAGE_PMD_SHIFT in huge_mm.h, it will cause a
> build bug when PMD-level huge pages are not supported. So I think
> HPAGE_PMD_ORDER should be used for all huge pages (both THP and hugetlb).

Thanks, yeah that makes sense, so let's keep it all as HPAGE_PMD_ORDER then!

Cheers, Lorenzo