[PATCH v11 8/8] mm: folio_zero_user: cache neighbouring pages

Posted by Ankur Arora 1 month ago
folio_zero_user() zeroes the folio straight through, with no regard
for cache temporal locality.

This replaced the approach of commit c6ddfb6c5890 ("mm, clear_huge_page:
move order algorithm into a separate function"), where we cleared one
page at a time, converging on the faulting page from the left and the right.

To retain limited temporal locality, split the clearing into three
parts: the faulting page and its immediate neighbourhood, and the
regions to its left and right. We clear the local neighbourhood last
to maximize the chances of it staying in the cache.

Performance
===

AMD Genoa (EPYC 9J14, cpus=2 sockets * 96 cores * 2 threads,
           memory=2.2 TB, L1d=16K/thread, L2=512K/thread, L3=2MB/thread)

vm-scalability/anon-w-seq-hugetlb: this workload runs with 384 processes
(one for each CPU) each zeroing anonymously mapped hugetlb memory which
is then accessed sequentially.
                                stime                utime

  discontiguous-page      1739.93 ( +- 6.15% )  1016.61 ( +- 4.75% )
  contiguous-page         1853.70 ( +- 2.51% )  1187.13 ( +- 3.50% )
  batched-pages           1756.75 ( +- 2.98% )  1133.32 ( +- 4.89% )
  neighbourhood-last      1725.18 ( +- 4.59% )  1123.78 ( +- 7.38% )

Both stime and utime respond more or less as expected. There is a
fair amount of run-to-run variation, but the general trend is that
stime drops and utime increases. There are a few oddities, such as
contiguous-page performing very differently from batched-pages.

That said, this is likely an uncommon pattern: we saturate the memory
bandwidth (since all CPUs are running the test) while at the same time
being cache constrained, because we access the entire region.

Kernel make (make -j 12 bzImage):

                              stime                  utime

  discontiguous-page      199.29 ( +- 0.63% )   1431.67 ( +- 0.04% )
  contiguous-page         193.76 ( +- 0.58% )   1433.60 ( +- 0.05% )
  batched-pages           193.92 ( +- 0.76% )   1431.04 ( +- 0.08% )
  neighbourhood-last      194.46 ( +- 0.68% )   1431.51 ( +- 0.06% )

For make, the utime stays relatively flat, with a fairly small (-2.4%)
improvement in the stime.

Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Reviewed-by: Raghavendra K T <raghavendra.kt@amd.com>
Tested-by: Raghavendra K T <raghavendra.kt@amd.com>
---
 mm/memory.c | 41 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 38 insertions(+), 3 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index 49e7154121f5..a27ef2eb92db 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7262,6 +7262,15 @@ static void clear_contig_highpages(struct page *page, unsigned long addr,
 	}
 }
 
+/*
+ * When zeroing a folio, we want to differentiate between pages in the
+ * vicinity of the faulting address where we have spatial and temporal
+ * locality, and those far away where we don't.
+ *
+ * Use a radius of 2 for determining the local neighbourhood.
+ */
+#define FOLIO_ZERO_LOCALITY_RADIUS	2
+
 /**
  * folio_zero_user - Zero a folio which will be mapped to userspace.
  * @folio: The folio to zero.
@@ -7269,10 +7278,36 @@ static void clear_contig_highpages(struct page *page, unsigned long addr,
  */
 void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 {
-	unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
+	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
+	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
+	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
+	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
+	struct range r[3];
+	int i;
 
-	clear_contig_highpages(folio_page(folio, 0),
-				base_addr, folio_nr_pages(folio));
+	/*
+	 * Faulting page and its immediate neighbourhood. Will be cleared at the
+	 * end to keep its cachelines hot.
+	 */
+	r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
+			    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
+
+	/* Region to the left of the fault */
+	r[1] = DEFINE_RANGE(pg.start,
+			    clamp_t(s64, r[2].start - 1, pg.start - 1, r[2].start));
+
+	/* Region to the right of the fault: always valid for the common fault_idx=0 case. */
+	r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end + 1, r[2].end, pg.end + 1),
+			    pg.end);
+
+	for (i = 0; i < ARRAY_SIZE(r); i++) {
+		const unsigned long addr = base_addr + r[i].start * PAGE_SIZE;
+		const unsigned int nr_pages = range_len(&r[i]);
+		struct page *page = folio_page(folio, r[i].start);
+
+		if (nr_pages > 0)
+			clear_contig_highpages(page, addr, nr_pages);
+	}
 }
 
 static int copy_user_gigantic_page(struct folio *dst, struct folio *src,
-- 
2.31.1
Re: [PATCH v11 8/8] mm: folio_zero_user: cache neighbouring pages
Posted by David Hildenbrand (Red Hat) 1 month ago
On 1/7/26 08:20, Ankur Arora wrote:
> folio_zero_user() zeroes the folio straight through, with no regard
> for cache temporal locality.
> 
> This replaced the approach of commit c6ddfb6c5890 ("mm, clear_huge_page:
> move order algorithm into a separate function"), where we cleared one
> page at a time, converging on the faulting page from the left and the right.
> 
> To retain limited temporal locality, split the clearing into three
> parts: the faulting page and its immediate neighbourhood, and the
> regions to its left and right. We clear the local neighbourhood last
> to maximize the chances of it staying in the cache.
> 
> Performance
> ===
> 
> AMD Genoa (EPYC 9J14, cpus=2 sockets * 96 cores * 2 threads,
>             memory=2.2 TB, L1d=16K/thread, L2=512K/thread, L3=2MB/thread)
> 
> vm-scalability/anon-w-seq-hugetlb: this workload runs with 384 processes
> (one for each CPU) each zeroing anonymously mapped hugetlb memory which
> is then accessed sequentially.
>                                  stime                utime
> 
>    discontiguous-page      1739.93 ( +- 6.15% )  1016.61 ( +- 4.75% )
>    contiguous-page         1853.70 ( +- 2.51% )  1187.13 ( +- 3.50% )
>    batched-pages           1756.75 ( +- 2.98% )  1133.32 ( +- 4.89% )
>    neighbourhood-last      1725.18 ( +- 4.59% )  1123.78 ( +- 7.38% )
> 
> Both stime and utime respond more or less as expected. There is a
> fair amount of run-to-run variation, but the general trend is that
> stime drops and utime increases. There are a few oddities, such as
> contiguous-page performing very differently from batched-pages.
> 
> That said, this is likely an uncommon pattern: we saturate the memory
> bandwidth (since all CPUs are running the test) while at the same time
> being cache constrained, because we access the entire region.
> 
> Kernel make (make -j 12 bzImage):
> 
>                                stime                  utime
> 
>    discontiguous-page      199.29 ( +- 0.63% )   1431.67 ( +- 0.04% )
>    contiguous-page         193.76 ( +- 0.58% )   1433.60 ( +- 0.05% )
>    batched-pages           193.92 ( +- 0.76% )   1431.04 ( +- 0.08% )
>    neighbourhood-last      194.46 ( +- 0.68% )   1431.51 ( +- 0.06% )
> 
> For make, the utime stays relatively flat, with a fairly small (-2.4%)
> improvement in the stime.
> 
> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
> Reviewed-by: Raghavendra K T <raghavendra.kt@amd.com>
> Tested-by: Raghavendra K T <raghavendra.kt@amd.com>
> ---

Nothing jumped at me, thanks!

Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>

-- 
Cheers

David
[PATCH v3] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by Ankur Arora 2 days, 7 hours ago
riscv64-gcc-linux-gnu (v8.5) reports a compile time assert in:

   r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
 		       clamp_t(s64, fault_idx + radius, pg.start, pg.end));

where it decides that pg.start > pg.end in:
  clamp_t(s64, fault_idx + radius, pg.start, pg.end));

where pg comes from:
  const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);

That does not seem like it could be true. Even for pg.start == pg.end,
we would need folio_test_large() to evaluate to false at compile time:

  static inline unsigned long folio_nr_pages(const struct folio *folio)
  {
	if (!folio_test_large(folio))
		return 1;
	return folio_large_nr_pages(folio);
  }
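
For context: clamp_t() statically asserts that lo <= hi whenever the
compiler believes it can prove the comparison. Paraphrased (loosely,
and varying by kernel version) from include/linux/minmax.h:

  #define __clamp_once(type, val, lo, hi, uval, ulo, uhi) ({		\
	type uval = (val), ulo = (lo), uhi = (hi);			\
	BUILD_BUG_ON_MSG(statically_true(ulo > uhi),			\
		"clamp() low limit " #lo " greater than high limit " #hi); \
	(uval >= uhi ? uhi : (uval <= ulo ? ulo : uval)); })

So a confused value-range analysis in an older compiler can trip the
BUILD_BUG_ON_MSG() even though the condition can never hold at runtime.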

Work around this by open coding the range computation. Also, simplify the type
declarations for the relevant variables.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
Fixes: 93552c9a3350 ("mm: folio_zero_user: cache neighbouring pages")
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
---

Hi Andrew

This version removes an unnecessary cast introduced in v2, which gets
rid of this hunk:

 	if (nr_pages > 0)
-		clear_contig_highpages(page, addr, nr_pages);
+		clear_contig_highpages(page, addr, (unsigned int)nr_pages);

Could you queue this one instead?

Thanks
Ankur

 mm/memory.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ce933ee4a3dd..b15f11a0bfa8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7284,7 +7284,7 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
 	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
 	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
-	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
+	const long radius = FOLIO_ZERO_LOCALITY_RADIUS;
 	struct range r[3];
 	int i;
 
@@ -7292,20 +7292,19 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 	 * Faulting page and its immediate neighbourhood. Will be cleared at the
 	 * end to keep its cachelines hot.
 	 */
-	r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
-			    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
+	r[2] = DEFINE_RANGE(fault_idx - radius < (long)pg.start ? pg.start : fault_idx - radius,
+			    fault_idx + radius > (long)pg.end   ? pg.end   : fault_idx + radius);
+
 
 	/* Region to the left of the fault */
-	r[1] = DEFINE_RANGE(pg.start,
-			    clamp_t(s64, r[2].start - 1, pg.start - 1, r[2].start));
+	r[1] = DEFINE_RANGE(pg.start, r[2].start - 1);
 
 	/* Region to the right of the fault: always valid for the common fault_idx=0 case. */
-	r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end + 1, r[2].end, pg.end + 1),
-			    pg.end);
+	r[0] = DEFINE_RANGE(r[2].end + 1, pg.end);
 
 	for (i = 0; i < ARRAY_SIZE(r); i++) {
 		const unsigned long addr = base_addr + r[i].start * PAGE_SIZE;
-		const unsigned int nr_pages = range_len(&r[i]);
+		const long nr_pages = (long)range_len(&r[i]);
 		struct page *page = folio_page(folio, r[i].start);
 
 		if (nr_pages > 0)
-- 
2.31.1
Re: [PATCH v3] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by David Hildenbrand (Arm) 1 day, 20 hours ago
On 2/6/26 23:38, Ankur Arora wrote:
> riscv64-gcc-linux-gnu (v8.5) reports a compile time assert in:
> 
>     r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
>   		       clamp_t(s64, fault_idx + radius, pg.start, pg.end));
> 
> where it decides that pg.start > pg.end in:
>    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
> 
> where pg comes from:
>    const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
> 
> That does not seem like it could be true. Even for pg.start == pg.end,
> we would need folio_test_large() to evaluate to false at compile time:
> 
>    static inline unsigned long folio_nr_pages(const struct folio *folio)
>    {
> 	if (!folio_test_large(folio))
> 		return 1;
> 	return folio_large_nr_pages(folio);
>    }
> 
> Work around this by open coding the range computation. Also, simplify the type
> declarations for the relevant variables.
> 
> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
> Fixes: 93552c9a3350 ("mm: folio_zero_user: cache neighbouring pages")
> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
> Acked-by: David Hildenbrand (Arm) <david@kernel.org>
> ---
> 
> Hi Andrew
> 

Thanks Ankur, and hope you'll have a nice weekend!

-- 
Cheers,

David
Re: [PATCH v3] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by Ankur Arora 5 hours ago
David Hildenbrand (Arm) <david@kernel.org> writes:

> On 2/6/26 23:38, Ankur Arora wrote:
>> riscv64-gcc-linux-gnu (v8.5) reports a compile time assert in:
>>     r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
>>   		       clamp_t(s64, fault_idx + radius, pg.start, pg.end));
>> where it decides that pg.start > pg.end in:
>>    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
>> where pg comes from:
>>    const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
>> That does not seem like it could be true. Even for pg.start == pg.end,
>> we would need folio_test_large() to evaluate to false at compile time:
>>    static inline unsigned long folio_nr_pages(const struct folio *folio)
>>    {
>> 	if (!folio_test_large(folio))
>> 		return 1;
>> 	return folio_large_nr_pages(folio);
>>    }
>> Work around this by open coding the range computation. Also, simplify the type
>> declarations for the relevant variables.
>> Reported-by: kernel test robot <lkp@intel.com>
>> Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
>> Fixes: 93552c9a3350 ("mm: folio_zero_user: cache neighbouring pages")
>> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
>> Acked-by: David Hildenbrand (Arm) <david@kernel.org>
>> ---
>> Hi Andrew
>>
>
> Thanks Amkur and hoping you'll have a nice weekend!

Thanks and to you too (or at least what's left of it) :).

--
ankur
[PATCH v2] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by Ankur Arora 1 week, 4 days ago
riscv64-gcc-linux-gnu (v8.5) reports a compile time assert in:

   r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
 		       clamp_t(s64, fault_idx + radius, pg.start, pg.end));

where it decides that pg.start > pg.end in:
  clamp_t(s64, fault_idx + radius, pg.start, pg.end));

where pg comes from:
  const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);

That does not seem like it could be true. Even for pg.start == pg.end,
we would need folio_test_large() to evaluate to false at compile time:

  static inline unsigned long folio_nr_pages(const struct folio *folio)
  {
	if (!folio_test_large(folio))
		return 1;
	return folio_large_nr_pages(folio);
  }

Work around this by open coding the range computation. Also, simplify the type
declarations for the relevant variables.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
Fixes: 93552c9a3350 ("mm: folio_zero_user: cache neighbouring pages")
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---

Hi Andrew

As David pointed out, the previous open coded version makes a few
unnecessary changes. Could you queue this one instead?

Thanks
Ankur


 mm/memory.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ce933ee4a3dd..f5bfc082ab61 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7284,7 +7284,7 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
 	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
 	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
-	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
+	const long radius = FOLIO_ZERO_LOCALITY_RADIUS;
 	struct range r[3];
 	int i;
 
@@ -7292,24 +7292,23 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 	 * Faulting page and its immediate neighbourhood. Will be cleared at the
 	 * end to keep its cachelines hot.
 	 */
-	r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
-			    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
+	r[2] = DEFINE_RANGE(fault_idx - radius < (long)pg.start ? pg.start : fault_idx - radius,
+			    fault_idx + radius > (long)pg.end   ? pg.end   : fault_idx + radius);
+
 
 	/* Region to the left of the fault */
-	r[1] = DEFINE_RANGE(pg.start,
-			    clamp_t(s64, r[2].start - 1, pg.start - 1, r[2].start));
+	r[1] = DEFINE_RANGE(pg.start, r[2].start - 1);
 
 	/* Region to the right of the fault: always valid for the common fault_idx=0 case. */
-	r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end + 1, r[2].end, pg.end + 1),
-			    pg.end);
+	r[0] = DEFINE_RANGE(r[2].end + 1, pg.end);
 
 	for (i = 0; i < ARRAY_SIZE(r); i++) {
 		const unsigned long addr = base_addr + r[i].start * PAGE_SIZE;
-		const unsigned int nr_pages = range_len(&r[i]);
+		const long nr_pages = (long)range_len(&r[i]);
 		struct page *page = folio_page(folio, r[i].start);
 
 		if (nr_pages > 0)
-			clear_contig_highpages(page, addr, nr_pages);
+			clear_contig_highpages(page, addr, (unsigned int)nr_pages);
 	}
 }
 
-- 
2.31.1
Re: [PATCH v2] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by David Hildenbrand (arm) 4 days, 9 hours ago
On 1/28/26 19:59, Ankur Arora wrote:
> riscv64-gcc-linux-gnu (v8.5) reports a compile time assert in:
> 
>     r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
>   		       clamp_t(s64, fault_idx + radius, pg.start, pg.end));
> 
> where it decides that pg.start > pg.end in:
>    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
> 
> where pg comes from:
>    const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
> 
> That does not seem like it could be true. Even for pg.start == pg.end,
> we would need folio_test_large() to evaluate to false at compile time:
> 
>    static inline unsigned long folio_nr_pages(const struct folio *folio)
>    {
> 	if (!folio_test_large(folio))
> 		return 1;
> 	return folio_large_nr_pages(folio);
>    }
> 
> Work around this by open coding the range computation. Also, simplify the type
> declarations for the relevant variables.
> 
> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
> Fixes: 93552c9a3350 ("mm: folio_zero_user: cache neighbouring pages")
> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
> ---
> 
> Hi Andrew
> 
> As David pointed out, the previous open coded version makes a few
> unnecessary changes. Could you queue this one instead?
> 

I'm late, maybe this is already upstream.

> Thanks
> Ankur
> 
> 
>   mm/memory.c | 17 ++++++++---------
>   1 file changed, 8 insertions(+), 9 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index ce933ee4a3dd..f5bfc082ab61 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -7284,7 +7284,7 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
>   	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
>   	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
>   	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
> -	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
> +	const long radius = FOLIO_ZERO_LOCALITY_RADIUS;
>   	struct range r[3];
>   	int i;
>   
> @@ -7292,24 +7292,23 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
>   	 * Faulting page and its immediate neighbourhood. Will be cleared at the
>   	 * end to keep its cachelines hot.
>   	 */
> -	r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
> -			    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
> +	r[2] = DEFINE_RANGE(fault_idx - radius < (long)pg.start ? pg.start : fault_idx - radius,
> +			    fault_idx + radius > (long)pg.end   ? pg.end   : fault_idx + radius);
> +

LGTM, although it could likely be made a bit more readable by using some temporary variables.


const long fault_idx_low = fault_idx - radius;
const long fault_idx_high = fault_idx + radius;

r[2] = DEFINE_RANGE(fault_idx_low < (long)pg.start ? pg.start : fault_idx_low,
		    fault_idx_high > (long)pg.end ? pg.end : fault_idx_high);

Well, still a bit unreadable, so ... :)


>   
>   	/* Region to the left of the fault */
> -	r[1] = DEFINE_RANGE(pg.start,
> -			    clamp_t(s64, r[2].start - 1, pg.start - 1, r[2].start));
> +	r[1] = DEFINE_RANGE(pg.start, r[2].start - 1);
>   
>   	/* Region to the right of the fault: always valid for the common fault_idx=0 case. */
> -	r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end + 1, r[2].end, pg.end + 1),
> -			    pg.end);
> +	r[0] = DEFINE_RANGE(r[2].end + 1, pg.end);

TBH, without the clamp that looks much more readable here.

>   
>   	for (i = 0; i < ARRAY_SIZE(r); i++) {
>   		const unsigned long addr = base_addr + r[i].start * PAGE_SIZE;
> -		const unsigned int nr_pages = range_len(&r[i]);
> +		const long nr_pages = (long)range_len(&r[i]);
>   		struct page *page = folio_page(folio, r[i].start);
>   
>   		if (nr_pages > 0)
> -			clear_contig_highpages(page, addr, nr_pages);
> +			clear_contig_highpages(page, addr, (unsigned int)nr_pages);

Is that cast really required?

-- 
Cheers,

David
Re: [PATCH v2] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by Andrew Morton 4 days, 8 hours ago
On Wed, 4 Feb 2026 22:01:42 +0100 "David Hildenbrand (arm)" <david@kernel.org> wrote:

> > As David pointed out, the previous open coded version makes a few
> > unnecessary changes. Could you queue this one instead?
> > 
> 
> I'm late, maybe this is already upstream.

It's in mm-unstable.  The second round of MM upstreaming is two weeks hence.

> >   
> >   	/* Region to the left of the fault */
> > -	r[1] = DEFINE_RANGE(pg.start,
> > -			    clamp_t(s64, r[2].start - 1, pg.start - 1, r[2].start));
> > +	r[1] = DEFINE_RANGE(pg.start, r[2].start - 1);
> >   
> >   	/* Region to the right of the fault: always valid for the common fault_idx=0 case. */
> > -	r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end + 1, r[2].end, pg.end + 1),
> > -			    pg.end);
> > +	r[0] = DEFINE_RANGE(r[2].end + 1, pg.end);
> 
> TBH, without the clamp that looks much more readable here.

me too.

> >   
> >   	for (i = 0; i < ARRAY_SIZE(r); i++) {
> >   		const unsigned long addr = base_addr + r[i].start * PAGE_SIZE;
> > -		const unsigned int nr_pages = range_len(&r[i]);
> > +		const long nr_pages = (long)range_len(&r[i]);
> >   		struct page *page = folio_page(folio, r[i].start);
> >   
> >   		if (nr_pages > 0)
> > -			clear_contig_highpages(page, addr, nr_pages);
> > +			clear_contig_highpages(page, addr, (unsigned int)nr_pages);
> 
> Is that cast really required?

Seems not.  The types for nr_pages are a bit chaotic - u64->long->uint.
Re: [PATCH v2] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by Ankur Arora 4 days ago
Andrew Morton <akpm@linux-foundation.org> writes:

> On Wed, 4 Feb 2026 22:01:42 +0100 "David Hildenbrand (arm)" <david@kernel.org> wrote:
>
>> > As David pointed out, the previous open coded version makes a few
>> > unnecessary changes. Could you queue this one instead?
>> >
>>
>> I'm late, maybe this is already upstream.
>
> It's in mm-unstable.  The second round of MM upstreaming is two weeks hence.
>
>> >
>> >   	/* Region to the left of the fault */
>> > -	r[1] = DEFINE_RANGE(pg.start,
>> > -			    clamp_t(s64, r[2].start - 1, pg.start - 1, r[2].start));
>> > +	r[1] = DEFINE_RANGE(pg.start, r[2].start - 1);
>> >
>> >   	/* Region to the right of the fault: always valid for the common fault_idx=0 case. */
>> > -	r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end + 1, r[2].end, pg.end + 1),
>> > -			    pg.end);
>> > +	r[0] = DEFINE_RANGE(r[2].end + 1, pg.end);
>>
>> TBH, without the clamp that looks much more readable here.
>
> me too.
>
>> >
>> >   	for (i = 0; i < ARRAY_SIZE(r); i++) {
>> >   		const unsigned long addr = base_addr + r[i].start * PAGE_SIZE;
>> > -		const unsigned int nr_pages = range_len(&r[i]);
>> > +		const long nr_pages = (long)range_len(&r[i]);
>> >   		struct page *page = folio_page(folio, r[i].start);
>> >
>> >   		if (nr_pages > 0)
>> > -			clear_contig_highpages(page, addr, nr_pages);
>> > +			clear_contig_highpages(page, addr, (unsigned int)nr_pages);
>>
>> Is that cast really required?
>
> Seems not.  The types for nr_pages are a bit chaotic - u64->long->uint.

Yes agreed.

The first u64 is because currently struct range only supports that.
Then the cast to signed long is because the range can be negative
and the clear_contig_highpages() call is only made if nr_pages > 0.

And, the third one is almost certainly unnecessary for any realistic
hugepage size, but since nr_pages is being truncated, I wanted that
to be explicit.
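
For reference, range_len() in include/linux/range.h is simply:

  static inline u64 range_len(const struct range *range)
  {
	return range->end - range->start + 1;
  }

so for the empty ranges above (end computed as start - 1, possibly
wrapping through u64) the length comes out as 0, and the nr_pages > 0
check skips the clear_contig_highpages() call.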

--
ankur
Re: [PATCH v2] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by David Hildenbrand (Arm) 3 days, 17 hours ago
On 2/5/26 06:48, Ankur Arora wrote:
> 
> Andrew Morton <akpm@linux-foundation.org> writes:
> 
>> On Wed, 4 Feb 2026 22:01:42 +0100 "David Hildenbrand (arm)" <david@kernel.org> wrote:
>>
>>>
>>> I'm late, maybe this is already upstream.
>>
>> It's in mm-unstable.  The second round of MM upstreaming is two weeks hence.
>>
>>>
>>> TBH, without the clamp that looks much more readable here.
>>
>> me too.
>>
>>>
>>> Is that cast really required?
>>
>> Seems not.  The types for nr_pages are a bit chaotic - u64->long->uint.
> 
> Yes agreed.
> 
> The first u64 is because currently struct range only supports that.
> Then the cast to signed long is because the range can be negative
> and the clear_contig_highpages() call is only made if nr_pages > 0.

That makes sense to me.

> 
> And, the third one is almost certainly unnecessary for any realistic
> hugepage size, but since nr_pages is being truncated, I wanted that
> to be explicit.

But the non-silent truncation is no better? IOW, it doesn't matter.

You could just make clear_contig_highpages() consume an unsigned long ...

-- 
Cheers,

David
Re: [PATCH v2] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by Ankur Arora 3 days ago
David Hildenbrand (Arm) <david@kernel.org> writes:

> On 2/5/26 06:48, Ankur Arora wrote:
>> Andrew Morton <akpm@linux-foundation.org> writes:
>>
>>> On Wed, 4 Feb 2026 22:01:42 +0100 "David Hildenbrand (arm)" <david@kernel.org> wrote:
>>>
>>>>
>>>> I'm late, maybe this is already upstream.
>>>
>>> It's in mm-unstable.  The second round of MM upstreaming is two weeks hence.
>>>
>>>>
>>>> TBH, without the clamp that looks much more readable here.
>>>
>>> me too.
>>>
>>>>
>>>> Is that cast really required?
>>>
>>> Seems not.  The types for nr_pages are a bit chaotic - u64->long->uint.
>> Yes agreed.
>> The first u64 is because currently struct range only supports that.
>> Then the cast to signed long is because the range can be negative
>> and the clear_contig_highpages() call is only made if nr_pages > 0.
>
> That makes sense to me.
>
>> And, the third one is almost certainly unnecessary for any realistic
>> hugepage size, but since nr_pages is being truncated, I wanted that
>> to be explicit.
>
> But the non-silent truncation is no better? IOW, it doesn't matter.

I never seem to get them but I thought we had some kconfig option that
makes gcc give a warning to that effect.

I can update this patch to just implicitly truncate.

> You could just make clear_contig_highpages() consume an unsigned long ...

Unfortunately that'll be an even bigger mess. The clear_contig_highpages()
version in mm-stable uses the unsigned intness of nr_pages all over:

  static void clear_contig_highpages(struct page *page, unsigned long addr,
				   unsigned int nr_pages)
  {
	unsigned int i, count;
	/*
	 * When clearing we want to operate on the largest extent possible to
	 * allow for architecture specific extent based optimizations.
	 *
	 * However, since clear_user_highpages() (and primitives clear_user_pages(),
	 * clear_pages()), do not call cond_resched(), limit the unit size when
	 * running under non-preemptible scheduling models.
	 */
	const unsigned int unit = preempt_model_preemptible() ?
				   nr_pages : PROCESS_PAGES_NON_PREEMPT_BATCH;

	might_sleep();

	for (i = 0; i < nr_pages; i += count) {
		cond_resched();

		count = min(unit, nr_pages - i);
		clear_user_highpages(page + i, addr + i * PAGE_SIZE, count);
	}
  }

Thanks
--
ankur
Re: [PATCH v2] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by David Hildenbrand (Arm) 2 days, 21 hours ago
On 2/6/26 06:42, Ankur Arora wrote:
> 
> David Hildenbrand (Arm) <david@kernel.org> writes:
> 
>> On 2/5/26 06:48, Ankur Arora wrote:
>>> Andrew Morton <akpm@linux-foundation.org> writes:
>>>
>>> Yes agreed.
>>> The first u64 is because currently struct range only supports that.
>>> Then the cast to signed long is because the range can be negative
>>> and the clear_contig_highpages() call is only made if nr_pages > 0.
>>
>> That makes sense to me.
>>
>>> And, the third one is almost certainly unnecessary for any realistic
>>> hugepage size, but since nr_pages is being truncated, I wanted that
>>> to be explicit.
>>
>> But the non-silent truncation is no better? IOW, it doesn't matter.
> 
> I never seem to get them but I thought we had some kconfig option that
> makes gcc give a warning to that effect.

I think we do it all the time :)

> 
> I can update this patch to just implicitly truncate.
> 

Yeah, I think the explicit cast can just be dropped.

>> You could just make clear_contig_highpages() consume an unsigned long ...
> 
> Unfortunately that'll be an even bigger mess. The clear_contig_highpages()
> version in mm-stable uses the unsigned intness of nr_pages all over:

Right.

In any case, thanks and feel free to add

Acked-by: David Hildenbrand (Arm) <david@kernel.org>

-- 
Cheers,

David
[PATCH] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by Ankur Arora 1 week, 6 days ago
riscv64-gcc-linux-gnu (v8.5) reports a compile time assert in:

   r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
 		       clamp_t(s64, fault_idx + radius, pg.start, pg.end));

where it decides that pg.start > pg.end in:
  clamp_t(s64, fault_idx + radius, pg.start, pg.end));

where pg comes from:
  const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);

That does not seem like it could be true. Even for pg.start == pg.end,
we would need folio_test_large() to evaluate to false at compile time:

  static inline unsigned long folio_nr_pages(const struct folio *folio)
  {
	if (!folio_test_large(folio))
		return 1;
	return folio_large_nr_pages(folio);
  }

Work around this by open coding the range computation. Also, simplify the type
declarations for the relevant variables.

Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
---

Hi Andrew

I'm not certain about linux-next rebasing protocol, but I'm guessing
this patch will be squashed in patch-8 ("mm: folio_zero_user: cache
neighbouring pages").

The commit message doesn't contain anything needing preserving if it is.

Thanks
Ankur

 mm/memory.c | 23 +++++++++++------------
 1 file changed, 11 insertions(+), 12 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index ce933ee4a3dd..e49340f51fa9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7282,30 +7282,29 @@ static void clear_contig_highpages(struct page *page, unsigned long addr,
 void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 {
 	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
-	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
 	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
-	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
+	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
+	const long radius = FOLIO_ZERO_LOCALITY_RADIUS;
 	struct range r[3];
 	int i;
 
 	/*
-	 * Faulting page and its immediate neighbourhood. Will be cleared at the
-	 * end to keep its cachelines hot.
+	 * Faulting page and its immediate neighbourhood. Cleared at the end to
+	 * keep its cachelines hot.
 	 */
-	r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
-			    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
+	r[2] = DEFINE_RANGE(fault_idx - radius < (long)pg.start ? pg.start : fault_idx - radius,
+			    fault_idx + radius > (long)pg.end   ? pg.end   : fault_idx + radius);
 
-	/* Region to the left of the fault */
-	r[1] = DEFINE_RANGE(pg.start,
-			    clamp_t(s64, r[2].start - 1, pg.start - 1, r[2].start));
+
+	/* Region to the left of the fault. */
+	r[1] = DEFINE_RANGE(pg.start, r[2].start - 1);
 
 	/* Region to the right of the fault: always valid for the common fault_idx=0 case. */
-	r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end + 1, r[2].end, pg.end + 1),
-			    pg.end);
+	r[0] = DEFINE_RANGE(r[2].end + 1, pg.end);
 
 	for (i = 0; i < ARRAY_SIZE(r); i++) {
 		const unsigned long addr = base_addr + r[i].start * PAGE_SIZE;
-		const unsigned int nr_pages = range_len(&r[i]);
+		const long nr_pages = (long)range_len(&r[i]);
 		struct page *page = folio_page(folio, r[i].start);
 
 		if (nr_pages > 0)
-- 
2.31.1
Re: [PATCH] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by David Hildenbrand (Red Hat) 1 week, 5 days ago
On 1/26/26 19:32, Ankur Arora wrote:
> riscv64-gcc-linux-gnu (v8.5) reports a compile time assert in:
> 
>     r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
>   		       clamp_t(s64, fault_idx + radius, pg.start, pg.end));
> 
> where it decides that pg.start > pg.end in:
>    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
> 
> where pg comes from:
>    const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
> 
> That does not seem like it could be true. Even for pg.start == pg.end,
> we would need folio_test_large() to evaluate to false at compile time:
> 
>    static inline unsigned long folio_nr_pages(const struct folio *folio)
>    {
> 	if (!folio_test_large(folio))
> 		return 1;
> 	return folio_large_nr_pages(folio);
>    }
> 
> Work around this by open coding the range computation. Also, simplify the type
> declarations for the relevant variables.
> 
> Reported-by: kernel test robot <lkp@intel.com>
> Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
> ---
> 
> Hi Andrew
> 
> I'm not certain about linux-next rebasing protocol, but I'm guessing
> this patch will be squashed in patch-8 ("mm: folio_zero_user: cache
> neighbouring pages").
> 
> The commit message doesn't contain anything needing preserving if it is.
> 
> Thanks
> Ankur
> 
>   mm/memory.c | 23 +++++++++++------------
>   1 file changed, 11 insertions(+), 12 deletions(-)
> 
> diff --git a/mm/memory.c b/mm/memory.c
> index ce933ee4a3dd..e49340f51fa9 100644
> --- a/mm/memory.c
> +++ b/mm/memory.c
> @@ -7282,30 +7282,29 @@ static void clear_contig_highpages(struct page *page, unsigned long addr,
>   void folio_zero_user(struct folio *folio, unsigned long addr_hint)
>   {
>   	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
> -	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
>   	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
> -	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
> +	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
> +	const long radius = FOLIO_ZERO_LOCALITY_RADIUS;
>   	struct range r[3];
>   	int i;
>   
>   	/*
> -	 * Faulting page and its immediate neighbourhood. Will be cleared at the
> -	 * end to keep its cachelines hot.
> +	 * Faulting page and its immediate neighbourhood. Cleared at the end to
> +	 * keep its cachelines hot.
>   	 */

Why are there rather unrelated changes in this patch? Like this comment 
change, or the movement of the "fault_idx" declaration above?

-- 
Cheers

David
Re: [PATCH] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by Ankur Arora 1 week, 5 days ago
David Hildenbrand (Red Hat) <david@kernel.org> writes:

> On 1/26/26 19:32, Ankur Arora wrote:
>> riscv64-gcc-linux-gnu (v8.5) reports a compile time assert in:
>>     r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
>>   		       clamp_t(s64, fault_idx + radius, pg.start, pg.end));
>> where it decides that pg.start > pg.end in:
>>    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
>> where pg comes from:
>>    const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
>> That does not seem like it could be true. Even for pg.start == pg.end,
>> we would need folio_test_large() to evaluate to false at compile time:
>>    static inline unsigned long folio_nr_pages(const struct folio *folio)
>>    {
>> 	if (!folio_test_large(folio))
>> 		return 1;
>> 	return folio_large_nr_pages(folio);
>>    }
>> Work around this by open coding the range computation. Also, simplify the type
>> declarations for the relevant variables.
>> Reported-by: kernel test robot <lkp@intel.com>
>> Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
>> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
>> ---
>> Hi Andrew
>> I'm not certain about linux-next rebasing protocol, but I'm guessing
>> this patch will be squashed in patch-8 ("mm: folio_zero_user: cache
>> neighbouring pages").
>> The commit message doesn't contain anything needing preserving if it is.
>> Thanks
>> Ankur
>>   mm/memory.c | 23 +++++++++++------------
>>   1 file changed, 11 insertions(+), 12 deletions(-)
>> diff --git a/mm/memory.c b/mm/memory.c
>> index ce933ee4a3dd..e49340f51fa9 100644
>> --- a/mm/memory.c
>> +++ b/mm/memory.c
>> @@ -7282,30 +7282,29 @@ static void clear_contig_highpages(struct page *page, unsigned long addr,
>>   void folio_zero_user(struct folio *folio, unsigned long addr_hint)
>>   {
>>   	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
>> -	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
>>   	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
>> -	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
>> +	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
>> +	const long radius = FOLIO_ZERO_LOCALITY_RADIUS;
>>   	struct range r[3];
>>   	int i;
>>     	/*
>> -	 * Faulting page and its immediate neighbourhood. Will be cleared at the
>> -	 * end to keep its cachelines hot.
>> +	 * Faulting page and its immediate neighbourhood. Cleared at the end to
>> +	 * keep its cachelines hot.
>>   	 */
>
> Why are there rather unrelated changes in this patch? Like this comment change,
> or the movement of the "fault_idx" declaration above?

Yeah, that was a mistake.

--
ankur
Re: [PATCH] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by David Hildenbrand (Red Hat) 1 week, 4 days ago
On 1/28/26 00:42, Ankur Arora wrote:
> 
> David Hildenbrand (Red Hat) <david@kernel.org> writes:
> 
>> On 1/26/26 19:32, Ankur Arora wrote:
>>> riscv64-gcc-linux-gnu (v8.5) reports a compile time assert in:
>>>      r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
>>>    		       clamp_t(s64, fault_idx + radius, pg.start, pg.end));
>>> where it decides that pg.start > pg.end in:
>>>     clamp_t(s64, fault_idx + radius, pg.start, pg.end));
>>> where pg comes from:
>>>     const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
>>> That does not seem like it could be true. Even for pg.start == pg.end,
>>> we would need folio_test_large() to evaluate to false at compile time:
>>>     static inline unsigned long folio_nr_pages(const struct folio *folio)
>>>     {
>>> 	if (!folio_test_large(folio))
>>> 		return 1;
>>> 	return folio_large_nr_pages(folio);
>>>     }
>>> Work around this by open coding the range computation. Also, simplify the type
>>> declarations for the relevant variables.
>>> Reported-by: kernel test robot <lkp@intel.com>
>>> Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
>>> Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
>>> ---
>>> Hi Andrew
>>> I'm not certain about linux-next rebasing protocol, but I'm guessing
>>> this patch will be squashed in patch-8 ("mm: folio_zero_user: cache
>>> neighbouring pages").
>>> The commit message doesn't contain anything needing preserving if it is.
>>> Thanks
>>> Ankur
>>>    mm/memory.c | 23 +++++++++++------------
>>>    1 file changed, 11 insertions(+), 12 deletions(-)
>>> diff --git a/mm/memory.c b/mm/memory.c
>>> index ce933ee4a3dd..e49340f51fa9 100644
>>> --- a/mm/memory.c
>>> +++ b/mm/memory.c
>>> @@ -7282,30 +7282,29 @@ static void clear_contig_highpages(struct page *page, unsigned long addr,
>>>    void folio_zero_user(struct folio *folio, unsigned long addr_hint)
>>>    {
>>>    	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
>>> -	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
>>>    	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
>>> -	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
>>> +	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
>>> +	const long radius = FOLIO_ZERO_LOCALITY_RADIUS;
>>>    	struct range r[3];
>>>    	int i;
>>>      	/*
>>> -	 * Faulting page and its immediate neighbourhood. Will be cleared at the
>>> -	 * end to keep its cachelines hot.
>>> +	 * Faulting page and its immediate neighbourhood. Cleared at the end to
>>> +	 * keep its cachelines hot.
>>>    	 */
>>
>> Why are there rather unrelated changes in this patch? Like this comment change,
>> or the movement of the "fault_idx" declaration above?
> 
> Yeah, that was a mistake.

Given that we cannot squash and it will be an independent fix, best to 
resend a minimal fix, thanks.

-- 
Cheers

David
Re: [PATCH] mm: folio_zero_user: open code range computation in folio_zero_user()
Posted by Andrew Morton 1 week, 6 days ago
On Mon, 26 Jan 2026 10:32:12 -0800 Ankur Arora <ankur.a.arora@oracle.com> wrote:

> riscv64-gcc-linux-gnu (v8.5) reports a compile time assert in:
> 
>    r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
>  		       clamp_t(s64, fault_idx + radius, pg.start, pg.end));
> 
> where it decides that pg.start > pg.end in:
>   clamp_t(s64, fault_idx + radius, pg.start, pg.end));
> 
> where pg comes from:
>   const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
> 
> That does not seem like it could be true. Even for pg.start == pg.end,
> we would need folio_test_large() to evaluate to false at compile time:
> 
>   static inline unsigned long folio_nr_pages(const struct folio *folio)
>   {
> 	if (!folio_test_large(folio))
> 		return 1;
> 	return folio_large_nr_pages(folio);
>   }
> 
> Work around this by open coding the range computation. Also, simplify the type
> declarations for the relevant variables.

Thanks.  It's a shame.

gcc-8.5 is five years old.  Documentation/Changes says we support 8.1.

> I'm not certain about linux-next rebasing protocol, but I'm guessing
> this patch will be squashed in patch-8 ("mm: folio_zero_user: cache
> neighbouring pages").

If the base patch was in mm-unstable then I'd squash.  But it is now in
the allegedly non-rebasing mm-stable so I'll queue this into
mm-unstable->mm-stable as a separate thing, with

Fixes: 93552c9a3350 ("mm: folio_zero_user: cache neighbouring pages")

So there will be a bisection hole for riscv people who use an ancient
compiler, shrug.

>  mm/memory.c | 23 +++++++++++------------
>  1 file changed, 11 insertions(+), 12 deletions(-)

We could of course revert this when we're able to confirm that the
currently-supported gcc versions all handle it OK.