[PATCH 06/10] arm64: mm: Simplify __TLBI_RANGE_NUM() macro

Will Deacon posted 10 patches 2 months, 3 weeks ago
Since commit e2768b798a19 ("arm64/mm: Modify range-based tlbi to
decrement scale"), we don't need to clamp the 'pages' argument to fit
the range for the specified 'scale' as we know that the upper bits will
have been processed in a prior iteration.

Drop the clamping and simplify the __TLBI_RANGE_NUM() macro.

Signed-off-by: Will Deacon <will@kernel.org>
---
 arch/arm64/include/asm/tlbflush.h | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)
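
For reviewers, a quick user-space model of the decrementing-scale walk
(a hedged sketch, not for merging: the macro names are simplified and
the loop shape only loosely follows __flush_tlb_range_op(), with the
single-page tail elided):

#include <stdio.h>

/* Simplified stand-ins for the kernel's __TLBI_RANGE_* macros. */
#define RANGE_PAGES(num, scale)	(((num) + 1UL) << (5 * (scale) + 1))
#define RANGE_NUM(pages, scale)	((long)((pages) >> (5 * (scale) + 1)) - 1)

int main(void)
{
	unsigned long pages = 7919;	/* example; must be <= RANGE_PAGES(31, 3) */
	int scale;

	/* Walk scale downwards so the upper bits of 'pages' go first. */
	for (scale = 3; scale >= 0; scale--) {
		long num = RANGE_NUM(pages, scale);

		if (num < 0)		/* 'pages' too small for this scale */
			continue;

		/* One range-based TLBI covers RANGE_PAGES(num, scale) pages. */
		pages -= RANGE_PAGES(num, scale);
		printf("scale=%d num=%ld pages left=%lu\n", scale, num, pages);
	}
	/* At most a single page can remain; the kernel flushes it by VA. */
	return 0;
}

Because the walk only ever moves to a smaller scale, the residual
'pages' after each step is strictly below 1 << (5 * scale + 1), so the
subsequent RANGE_NUM() values always fit the 5-bit NUM field and the
min() clamp in the old macro could never bite.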

diff --git a/arch/arm64/include/asm/tlbflush.h b/arch/arm64/include/asm/tlbflush.h
index ddd77e92b268..a8d21e52ef3a 100644
--- a/arch/arm64/include/asm/tlbflush.h
+++ b/arch/arm64/include/asm/tlbflush.h
@@ -205,11 +205,7 @@ static __always_inline void __tlbi_level(const enum tlbi_op op, u64 addr, u32 le
  * range.
  */
 #define __TLBI_RANGE_NUM(pages, scale)					\
-	({								\
-		int __pages = min((pages),				\
-				  __TLBI_RANGE_PAGES(31, (scale)));	\
-		(__pages >> (5 * (scale) + 1)) - 1;			\
-	})
+	(((pages) >> (5 * (scale) + 1)) - 1)
 
 /*
  *	TLB Invalidation
-- 
2.50.0.727.gbf7dc18ff4-goog
Re: [PATCH 06/10] arm64: mm: Simplify __TLBI_RANGE_NUM() macro
Posted by Dev Jain 2 months, 3 weeks ago
On 11/07/25 9:47 pm, Will Deacon wrote:
> Since commit e2768b798a19 ("arm64/mm: Modify range-based tlbi to
> decrement scale"), we don't need to clamp the 'pages' argument to fit
> the range for the specified 'scale' as we know that the upper bits will
> have been processed in a prior iteration.
>
> Drop the clamping and simplify the __TLBI_RANGE_NUM() macro.
>
> Signed-off-by: Will Deacon <will@kernel.org>

Just for the sake of it:

It is easy to check that for natural numbers x and y
(x >> y is just floor(x / 2^y)):

x < (((x >> y) + 1) << y)

This implies x < ((x >> y) << y) + (1 << y)
=> x - ((x >> y) << y) < (1 << y)

Substituting x = pages and y = 5 * scale + 1, the inequality
means that after the iteration at a given scale, the new value
of pages will be strictly less than 1 << (5 * scale + 1)
=> the current scale won't be repeated => the min() is redundant.
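
A throwaway brute-force check of the same inequality over a bounded
range, in case anyone wants to watch it hold (plain user-space C, not
part of the patch):

#include <assert.h>

int main(void)
{
	/* x - ((x >> y) << y) == x mod 2^y, so it is always < 1 << y. */
	for (unsigned long x = 0; x < (1UL << 20); x++)
		for (unsigned int y = 1; y <= 16; y++)
			assert(x - ((x >> y) << y) < (1UL << y));
	return 0;
}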

Reviewed-by: Dev Jain <dev.jain@arm.com>

Re: [PATCH 06/10] arm64: mm: Simplify __TLBI_RANGE_NUM() macro
Posted by Ryan Roberts 2 months, 3 weeks ago
On 11/07/2025 17:17, Will Deacon wrote:
> Since commit e2768b798a19 ("arm64/mm: Modify range-based tlbi to
> decrement scale"), we don't need to clamp the 'pages' argument to fit
> the range for the specified 'scale' as we know that the upper bits will
> have been processed in a prior iteration.
> 
> Drop the clamping and simplify the __TLBI_RANGE_NUM() macro.
> 
> Signed-off-by: Will Deacon <will@kernel.org>

Seems reasonable:

Reviewed-by: Ryan Roberts <ryan.roberts@arm.com>
