[PATCH v4 3/5] arm64: mm: support batch clearing of the young flag for large folios

Posted by Baolin Wang 1 month, 2 weeks ago
Currently, contpte_ptep_test_and_clear_young() and contpte_ptep_clear_flush_young()
only clear the young flag and flush TLBs for PTEs within the contiguous range.
To support batch PTE operations on other sizes of large folios in the following
patches, add a new parameter that specifies the number of PTEs mapping
consecutive pages of the same large folio in a single VMA and a single page
table.

While we are at it, rename the functions to maintain consistency with other
contpte_*() functions.
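
For illustration, a batched caller added later in the series might drive the
new interface roughly as below. This is a hypothetical sketch: the wrapper
name and the use of folio_nr_pages() to derive 'nr' are assumptions, not code
from this series:

	/*
	 * Hypothetical sketch: clear and flush the young flag for all
	 * 'nr' consecutive PTEs that map one large folio in a single
	 * VMA and a single page table.
	 */
	static inline int folio_clear_flush_young(struct vm_area_struct *vma,
						  unsigned long addr,
						  pte_t *ptep,
						  struct folio *folio)
	{
		unsigned int nr = folio_nr_pages(folio);

		return contpte_clear_flush_young_ptes(vma, addr, ptep, nr);
	}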

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
 arch/arm64/include/asm/pgtable.h | 12 ++++++------
 arch/arm64/mm/contpte.c          | 33 ++++++++++++++++++--------------
 2 files changed, 25 insertions(+), 20 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index 445e18e92221..d5fbe72e820a 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1648,10 +1648,10 @@ extern void contpte_clear_full_ptes(struct mm_struct *mm, unsigned long addr,
 extern pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
 				unsigned long addr, pte_t *ptep,
 				unsigned int nr, int full);
-extern int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
-				unsigned long addr, pte_t *ptep);
-extern int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
-				unsigned long addr, pte_t *ptep);
+int contpte_test_and_clear_young_ptes(struct vm_area_struct *vma,
+				unsigned long addr, pte_t *ptep, unsigned int nr);
+int contpte_clear_flush_young_ptes(struct vm_area_struct *vma,
+				unsigned long addr, pte_t *ptep, unsigned int nr);
 extern void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 				pte_t *ptep, unsigned int nr);
 extern int contpte_ptep_set_access_flags(struct vm_area_struct *vma,
@@ -1823,7 +1823,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 	if (likely(!pte_valid_cont(orig_pte)))
 		return __ptep_test_and_clear_young(vma, addr, ptep);
 
-	return contpte_ptep_test_and_clear_young(vma, addr, ptep);
+	return contpte_test_and_clear_young_ptes(vma, addr, ptep, CONT_PTES);
 }
 
 #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
@@ -1835,7 +1835,7 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
 	if (likely(!pte_valid_cont(orig_pte)))
 		return __ptep_clear_flush_young(vma, addr, ptep);
 
-	return contpte_ptep_clear_flush_young(vma, addr, ptep);
+	return contpte_clear_flush_young_ptes(vma, addr, ptep, CONT_PTES);
 }
 
 #define wrprotect_ptes wrprotect_ptes
diff --git a/arch/arm64/mm/contpte.c b/arch/arm64/mm/contpte.c
index e4ddeb46f25d..b929a455103f 100644
--- a/arch/arm64/mm/contpte.c
+++ b/arch/arm64/mm/contpte.c
@@ -508,8 +508,9 @@ pte_t contpte_get_and_clear_full_ptes(struct mm_struct *mm,
 }
 EXPORT_SYMBOL_GPL(contpte_get_and_clear_full_ptes);
 
-int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
-					unsigned long addr, pte_t *ptep)
+int contpte_test_and_clear_young_ptes(struct vm_area_struct *vma,
+					unsigned long addr, pte_t *ptep,
+					unsigned int nr)
 {
 	/*
 	 * ptep_clear_flush_young() technically requires us to clear the access
@@ -518,41 +519,45 @@ int contpte_ptep_test_and_clear_young(struct vm_area_struct *vma,
 	 * contig range when the range is covered by a single folio, we can get
 	 * away with clearing young for the whole contig range here, so we avoid
 	 * having to unfold.
+	 *
+	 * 'nr' is the number of consecutive (present) PTEs that map consecutive
+	 * pages of the same large folio in a single VMA and a single page table.
 	 */
 
+	unsigned long end = addr + nr * PAGE_SIZE;
 	int young = 0;
-	int i;
 
-	ptep = contpte_align_down(ptep);
-	addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
-
-	for (i = 0; i < CONT_PTES; i++, ptep++, addr += PAGE_SIZE)
+	ptep = contpte_align_addr_ptep(&addr, &end, ptep, nr);
+	for (; addr != end; ptep++, addr += PAGE_SIZE)
 		young |= __ptep_test_and_clear_young(vma, addr, ptep);
 
 	return young;
 }
-EXPORT_SYMBOL_GPL(contpte_ptep_test_and_clear_young);
+EXPORT_SYMBOL_GPL(contpte_test_and_clear_young_ptes);
 
-int contpte_ptep_clear_flush_young(struct vm_area_struct *vma,
-					unsigned long addr, pte_t *ptep)
+int contpte_clear_flush_young_ptes(struct vm_area_struct *vma,
+				unsigned long addr, pte_t *ptep,
+				unsigned int nr)
 {
 	int young;
 
-	young = contpte_ptep_test_and_clear_young(vma, addr, ptep);
+	young = contpte_test_and_clear_young_ptes(vma, addr, ptep, nr);
 
 	if (young) {
+		unsigned long end = addr + nr * PAGE_SIZE;
+
+		contpte_align_addr_ptep(&addr, &end, ptep, nr);
 		/*
 		 * See comment in __ptep_clear_flush_young(); same rationale for
 		 * eliding the trailing DSB applies here.
 		 */
-		addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
-		__flush_tlb_range_nosync(vma->vm_mm, addr, addr + CONT_PTE_SIZE,
+		__flush_tlb_range_nosync(vma->vm_mm, addr, end,
 					 PAGE_SIZE, true, 3);
 	}
 
 	return young;
 }
-EXPORT_SYMBOL_GPL(contpte_ptep_clear_flush_young);
+EXPORT_SYMBOL_GPL(contpte_clear_flush_young_ptes);
 
 void contpte_wrprotect_ptes(struct mm_struct *mm, unsigned long addr,
 					pte_t *ptep, unsigned int nr)
-- 
2.47.3
Re: [PATCH v4 3/5] arm64: mm: support batch clearing of the young flag for large folios
Posted by Ryan Roberts 1 month, 2 weeks ago
On 23/12/2025 05:48, Baolin Wang wrote:
> [...]
> @@ -1823,7 +1823,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
>  	if (likely(!pte_valid_cont(orig_pte)))
>  		return __ptep_test_and_clear_young(vma, addr, ptep);
>  
> -	return contpte_ptep_test_and_clear_young(vma, addr, ptep);
> +	return contpte_test_and_clear_young_ptes(vma, addr, ptep, CONT_PTES);

As per your fixup patch, I agree that nr should be 1 here, not CONT_PTES.

>  }
>  
>  #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
> @@ -1835,7 +1835,7 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
>  	if (likely(!pte_valid_cont(orig_pte)))
>  		return __ptep_clear_flush_young(vma, addr, ptep);
>  
> -	return contpte_ptep_clear_flush_young(vma, addr, ptep);
> +	return contpte_clear_flush_young_ptes(vma, addr, ptep, CONT_PTES);

And same here.

> [...]
>  	if (young) {
> +		unsigned long end = addr + nr * PAGE_SIZE;
> +
> +		contpte_align_addr_ptep(&addr, &end, ptep, nr);
>  		/*
>  		 * See comment in __ptep_clear_flush_young(); same rationale for
>  		 * eliding the trailing DSB applies here.
>  		 */
> -		addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
> -		__flush_tlb_range_nosync(vma->vm_mm, addr, addr + CONT_PTE_SIZE,
> +		__flush_tlb_range_nosync(vma->vm_mm, addr, end,
>  					 PAGE_SIZE, true, 3);

Hmm... The requirement is that we must flush the _page_ if clearing access for a
pte that does not have the contiguous bit set, or we must flush the _contpte
block_ if clearing access for a pte that does have the contiguous bit set.

With your changes, you may call for a large range that covers multiple contpte
blocks but only has a single pte in a single contpte block for which the access
bit was previously set. But that will cause flushing the TLB for the full range.
Could this cause a performance issue? Yes, no, maybe... I think it's unlikely
but I wouldn't rule it out in some edge case.

I wonder if it's better to track the sub-ranges where access was cleared and
only issue tlbi for those sub-ranges? Probably just keep it simple (the way you
have done it) until/unless we see an actual problem?
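
For concreteness, that sub-range tracking might look roughly like the sketch
below; completely untested, and it assumes addr/end have already been expanded
to CONT_PTE_SIZE boundaries in the same way
contpte_test_and_clear_young_ptes() does:

	static int clear_flush_young_subranges(struct vm_area_struct *vma,
					       unsigned long addr,
					       unsigned long end, pte_t *ptep)
	{
		int young = 0;

		/* Walk the aligned range one contpte block at a time. */
		while (addr != end) {
			unsigned long block_start = addr;
			int block_young = 0;

			for (; addr != block_start + CONT_PTE_SIZE;
			     addr += PAGE_SIZE, ptep++)
				block_young |= __ptep_test_and_clear_young(vma,
								addr, ptep);

			/* Only invalidate blocks where access was set. */
			if (block_young)
				__flush_tlb_range_nosync(vma->vm_mm,
							 block_start, addr,
							 PAGE_SIZE, true, 3);
			young |= block_young;
		}

		return young;
	}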

Thanks,
Ryan

Re: [PATCH v4 3/5] arm64: mm: support batch clearing of the young flag for large folios
Posted by Baolin Wang 1 month, 2 weeks ago

On 2025/12/24 22:07, Ryan Roberts wrote:
> On 23/12/2025 05:48, Baolin Wang wrote:
>> [...]
>> -	return contpte_ptep_test_and_clear_young(vma, addr, ptep);
>> +	return contpte_test_and_clear_young_ptes(vma, addr, ptep, CONT_PTES);
> 
> As per your fixup patch, I agree that nr should be 1 here, not CONT_PTES.

Yes.

>> [...]
>>   	if (young) {
>> +		unsigned long end = addr + nr * PAGE_SIZE;
>> +
>> +		contpte_align_addr_ptep(&addr, &end, ptep, nr);
>>   		/*
>>   		 * See comment in __ptep_clear_flush_young(); same rationale for
>>   		 * eliding the trailing DSB applies here.
>>   		 */
>> -		addr = ALIGN_DOWN(addr, CONT_PTE_SIZE);
>> -		__flush_tlb_range_nosync(vma->vm_mm, addr, addr + CONT_PTE_SIZE,
>> +		__flush_tlb_range_nosync(vma->vm_mm, addr, end,
>>   					 PAGE_SIZE, true, 3);
> 
> Hmm... The requirement is that we must flush the _page_ if clearing access for a
> pte that does not have the contiguous bit set, or we must flush the _contpte
> block_ if clearing access for a pte that does have the contiguous bit set.
> 
> With your changes, you may call for a large range that covers multiple contpte
> blocks but only has a single pte in a single contpte block for which the access
> bit was previously set. But that will cause flushing the TLB for the full range.
> Could this cause a performance issue? Yes, no, maybe... I think it's unlikely
> but I wouldn't rule it out in some edge case.
> 
> I wonder if it's better to track the sub-ranges where access was cleared and
> only issue tlbi for those sub-ranges? Probably just keep it simple (the way you
> have done it) until/unless we see an actual problem?

Good question. Indeed, as you said, we flush the TLB per folio now, 
which might increase the flush range. However, I think this approach is 
relatively reasonable for now.

First, the mm-core also tracks the access status per folio, and it's 
really unnecessary to add excessive complexity to track the access 
status of sub-pages (or sub-ranges). Tracking the access status for each
cont-block range, as well as for the non-cont pages, across the entire
large folio range would be too complicated.

Second, __flush_tlb_range_nosync() is a lightweight flush. I quickly ran 
a measurement on my machine and found that the overhead of 
__flush_tlb_range_nosync() barely changes between nr=16 and nr=256 (both 
are around 40 ns).
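
A measurement of this kind can be as simple as the sketch below (illustrative
only, not the exact code I ran; 'mm', 'addr' and 'nr' must describe a valid,
mapped range):

	u64 t0 = ktime_get_ns();

	__flush_tlb_range_nosync(mm, addr, addr + nr * PAGE_SIZE,
				 PAGE_SIZE, true, 3);

	pr_info("flush of %u ptes: %llu ns\n", nr, ktime_get_ns() - t0);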

Therefore, I would still prefer to keep the logic here simple.
Re: [PATCH v4 3/5] arm64: mm: support batch clearing of the young flag for large folios
Posted by Ryan Roberts 1 month ago
On 25/12/2025 02:48, Baolin Wang wrote:
> [...]
> Second, __flush_tlb_range_nosync() is a lightweight flush. I quickly ran a
> measurement on my machine and found that the overhead of
> __flush_tlb_range_nosync() barely changes between nr=16 and nr=256 (both are
> around 40 ns).

I'm not concerned about the direct cost of the flush; I agree it should be
lightweight given we elide the trailing DSB (although on older HW without
TLBI-by-range support this will be converted to multiple TLBI-by-page
instructions, which can cause stalls if there are too many of them).

My concern was the opportunity cost of evicting the entries for all the
non-accessed parts of the folio from the TLB. But of course, I'm talking
nonsense because the architecture does not allow caching non-accessed entries in
the TLB.

So doesn't sound like a problem; I think we can ignore this. Sorry for the noise.

> 
> Therefore, I would still prefer to keep the logic here simple.

Agreed.

Thanks,
Ryan

Re: [PATCH v4 3/5] arm64: mm: support batch clearing of the young flag for large folios
Posted by Baolin Wang 1 month ago

On 1/2/26 8:12 PM, Ryan Roberts wrote:
> [...]
> My concern was the opportunity cost of evicting the entries for all the
> non-accessed parts of the folio from the TLB. But of course, I'm talking
> nonsense because the architecture does not allow caching non-accessed entries in
> the TLB.

Ah, now I understand your concern :-). Yes, agreed.

> So doesn't sound like a problem; I think we can ignore this. Sorry for the noise.

OK. No worries.

>> Therefore, I would still prefer to keep the logic here simple.
> 
> Agreed.

Thanks for your review and valuable input.
[PATCH] arm64: mm: fix passing the incorrect 'CONT_PTES' for non-batched APIs
Posted by Baolin Wang 1 month, 2 weeks ago
Since contpte_test_and_clear_young_ptes() and contpte_clear_flush_young_ptes()
have already performed CONT_PTE_SIZE alignment and will clear the young flag
for the entire cont block, their non-batched callers do not need to pass
'CONT_PTES' to specify the cont block range. Otherwise, the operation may
exceed the range of a single cont block in the non-batched cases.
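
To illustrate with hypothetical numbers (4K pages, CONT_PTES = 16, so
CONT_PTE_SIZE = 0x10000), assuming contpte_align_addr_ptep() expands
[addr, end) outwards to cont block boundaries:

	addr = 0x213000, nr = CONT_PTES:
		end = 0x213000 + 16 * 0x1000 = 0x223000
		aligned range = [0x210000, 0x230000) -> two cont blocks

	addr = 0x213000, nr = 1:
		end = 0x213000 + 0x1000 = 0x214000
		aligned range = [0x210000, 0x220000) -> exactly one block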

Signed-off-by: Baolin Wang <baolin.wang@linux.alibaba.com>
---
Hi Andrew,

While running more tests, I found that ptep_test_and_clear_young() may clear
the young flag beyond a single cont block, causing issues. Please fold this
fixup into this patch to resolve it. Thanks.
---
 arch/arm64/include/asm/pgtable.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/arm64/include/asm/pgtable.h b/arch/arm64/include/asm/pgtable.h
index d5fbe72e820a..5e9ff16146c3 100644
--- a/arch/arm64/include/asm/pgtable.h
+++ b/arch/arm64/include/asm/pgtable.h
@@ -1823,7 +1823,7 @@ static inline int ptep_test_and_clear_young(struct vm_area_struct *vma,
 	if (likely(!pte_valid_cont(orig_pte)))
 		return __ptep_test_and_clear_young(vma, addr, ptep);
 
-	return contpte_test_and_clear_young_ptes(vma, addr, ptep, CONT_PTES);
+	return contpte_test_and_clear_young_ptes(vma, addr, ptep, 1);
 }
 
 #define __HAVE_ARCH_PTEP_CLEAR_YOUNG_FLUSH
@@ -1835,7 +1835,7 @@ static inline int ptep_clear_flush_young(struct vm_area_struct *vma,
 	if (likely(!pte_valid_cont(orig_pte)))
 		return __ptep_clear_flush_young(vma, addr, ptep);
 
-	return contpte_clear_flush_young_ptes(vma, addr, ptep, CONT_PTES);
+	return contpte_clear_flush_young_ptes(vma, addr, ptep, 1);
 }
 
 #define wrprotect_ptes wrprotect_ptes
-- 
2.47.3