Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
iterate on the xarray, have the invariant that
start <= folio->index < start + HPAGE_PMD_NR ... (i)
A folio is always naturally aligned in the pagecache, therefore
folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
offsets in the VMA are aligned to the order,
=> IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
Combining (i), (ii) and (iii), the claim is proven.
Therefore, remove this check.
While at it, simplify the comments.
Signed-off-by: Dev Jain <dev.jain@arm.com>
---
v1->v2:
- Remove the check instead of converting to VM_WARN_ON
- While at it, simplify the comments
Based on mm-new (8982358e1c87).
mm/khugepaged.c | 14 ++++----------
1 file changed, 4 insertions(+), 10 deletions(-)
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 5f668c1dd0fe4..b7b4680d27ab1 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2015,9 +2015,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
* we locked the first folio, then a THP might be there already.
* This will be discovered on the first iteration.
*/
- if (folio_order(folio) == HPAGE_PMD_ORDER &&
- folio->index == start) {
- /* Maybe PMD-mapped */
+ if (folio_order(folio) == HPAGE_PMD_ORDER) {
result = SCAN_PTE_MAPPED_HUGEPAGE;
goto out_unlock;
}
@@ -2345,15 +2343,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
continue;
}
- if (folio_order(folio) == HPAGE_PMD_ORDER &&
- folio->index == start) {
- /* Maybe PMD-mapped */
+ if (folio_order(folio) == HPAGE_PMD_ORDER) {
result = SCAN_PTE_MAPPED_HUGEPAGE;
/*
- * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
- * by the caller won't touch the page cache, and so
- * it's safe to skip LRU and refcount checks before
- * returning.
+ * PMD-sized THP implies that we can only try
+ * retracting the PTE table.
*/
folio_put(folio);
break;
--
2.34.1
On 27/02/26 8:05 PM, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
Reviewed-by: Anshuman Khandual <anshuman.khandual@arm.com>
> ---
> v1->v2:
> - Remove the check instead of converting to VM_WARN_ON
> - While at it, simplify the comments
>
> Based on mm-new (8982358e1c87).
>
> mm/khugepaged.c | 14 ++++----------
> 1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5f668c1dd0fe4..b7b4680d27ab1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2015,9 +2015,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
> * we locked the first folio, then a THP might be there already.
> * This will be discovered on the first iteration.
> */
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> goto out_unlock;
> }
> @@ -2345,15 +2343,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
> continue;
> }
>
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> /*
> - * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
> - * by the caller won't touch the page cache, and so
> - * it's safe to skip LRU and refcount checks before
> - * returning.
> + * PMD-sized THP implies that we can only try
> + * retracting the PTE table.
> */
> folio_put(folio);
> break;
On Fri, Feb 27, 2026 at 08:05:01PM +0530, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)

This is because __filemap_add_folio() aligns the index to folio_order(), right?

>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>

--
Wei Yang
Help you, Help me
On 04/03/26 1:57 pm, Wei Yang wrote:
> On Fri, Feb 27, 2026 at 08:05:01PM +0530, Dev Jain wrote:
>> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>>
>> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
>> iterate on the xarray, have the invariant that
>> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>>
>> A folio is always naturally aligned in the pagecache, therefore
>> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> This is because __filemap_add_folio() aligns the index to folio_order(), right?

No, see the code around the callers of mapping_align_index(). We retrieve the
max order from the index, not the other way around. We are already given the
index at which to put the folio by the caller, so we have to construct the
order from that.

>
>> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
>> offsets in the VMA are aligned to the order,
>> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>>
>> Combining (i), (ii) and (iii), the claim is proven.
>>
>> Therefore, remove this check.
>> While at it, simplify the comments.
>>
>> Signed-off-by: Dev Jain <dev.jain@arm.com>
On Fri, Feb 27, 2026 at 08:05:01PM +0530, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
Very mathematical :)
LGTM so:
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
> ---
> v1->v2:
> - Remove the check instead of converting to VM_WARN_ON
> - While at it, simplify the comments
>
> Based on mm-new (8982358e1c87).
>
> mm/khugepaged.c | 14 ++++----------
> 1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5f668c1dd0fe4..b7b4680d27ab1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2015,9 +2015,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
> * we locked the first folio, then a THP might be there already.
> * This will be discovered on the first iteration.
> */
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> goto out_unlock;
> }
> @@ -2345,15 +2343,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
> continue;
> }
>
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> /*
> - * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
> - * by the caller won't touch the page cache, and so
> - * it's safe to skip LRU and refcount checks before
> - * returning.
> + * PMD-sized THP implies that we can only try
> + * retracting the PTE table.
> */
> folio_put(folio);
> break;
> --
> 2.34.1
>
On 2/27/26 10:35 PM, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---

Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
On 2026/2/27 22:35, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---
> v1->v2:
> - Remove the check instead of converting to VM_WARN_ON
> - While at it, simplify the comments
>
> Based on mm-new (8982358e1c87).
>
> mm/khugepaged.c | 14 ++++----------
> 1 file changed, 4 insertions(+), 10 deletions(-)
>
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 5f668c1dd0fe4..b7b4680d27ab1 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -2015,9 +2015,7 @@ static enum scan_result collapse_file(struct mm_struct *mm, unsigned long addr,
> * we locked the first folio, then a THP might be there already.
> * This will be discovered on the first iteration.
> */
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> goto out_unlock;
> }
> @@ -2345,15 +2343,11 @@ static enum scan_result hpage_collapse_scan_file(struct mm_struct *mm,
> continue;
> }
>
> - if (folio_order(folio) == HPAGE_PMD_ORDER &&
> - folio->index == start) {
> - /* Maybe PMD-mapped */
> + if (folio_order(folio) == HPAGE_PMD_ORDER) {
> result = SCAN_PTE_MAPPED_HUGEPAGE;
> /*
> - * For SCAN_PTE_MAPPED_HUGEPAGE, further processing
> - * by the caller won't touch the page cache, and so
> - * it's safe to skip LRU and refcount checks before
> - * returning.
> + * PMD-sized THP implies that we can only try
> + * retracting the PTE table.
> */
> folio_put(folio);
> break;
LGTM!
The proof is sound, the combination of the loop invariant, natural
alignment, and VMA alignment requirements indeed makes the index
check redundant :D
Reviewed-by: Lance Yang <lance.yang@linux.dev>
On 2/27/26 15:35, Dev Jain wrote:
> Claim: folio_order(folio) == HPAGE_PMD_ORDER => folio->index == start.
>
> Proof: Both loops in hpage_collapse_scan_file and collapse_file, which
> iterate on the xarray, have the invariant that
> start <= folio->index < start + HPAGE_PMD_NR ... (i)
>
> A folio is always naturally aligned in the pagecache, therefore
> folio_order == HPAGE_PMD_ORDER => IS_ALIGNED(folio->index, HPAGE_PMD_NR) == true ... (ii)
>
> thp_vma_allowable_order -> thp_vma_suitable_order requires that the virtual
> offsets in the VMA are aligned to the order,
> => IS_ALIGNED(start, HPAGE_PMD_NR) == true ... (iii)
>
> Combining (i), (ii) and (iii), the claim is proven.
>
> Therefore, remove this check.
> While at it, simplify the comments.
>
> Signed-off-by: Dev Jain <dev.jain@arm.com>
> ---

There might be some conflict with Nico's changes.

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

--
Cheers,

David