riscv64-linux-gnu-gcc (v8.5) reports a compile-time assert in:
  r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
                      clamp_t(s64, fault_idx + radius, pg.start, pg.end));
where it decides that pg.start > pg.end in:
  clamp_t(s64, fault_idx + radius, pg.start, pg.end));
where pg comes from:
  const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
That does not seem like it could be true: pg.start is the constant 0, and
even for pg.start == pg.end we would need folio_test_large() to evaluate
to false at compile time:
  static inline unsigned long folio_nr_pages(const struct folio *folio)
  {
          if (!folio_test_large(folio))
                  return 1;

          return folio_large_nr_pages(folio);
  }
Work around this by open coding the range computation. Also, simplify
the type declarations for the relevant variables.
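The open-coded ternaries are just max()/min() with explicit signed
comparisons, and they can leave the left/right neighbour ranges empty;
the existing nr_pages > 0 test in the clearing loop absorbs that once
nr_pages is signed. A quick userspace sanity check of that reasoning
(hypothetical values; pg_start, pg_end and radius stand in for pg.start,
pg.end and FOLIO_ZERO_LOCALITY_RADIUS):

  #include <assert.h>

  int main(void)
  {
          const long pg_start = 0, pg_end = 15;   /* 16-page folio */
          const long radius = 2;

          for (long fault_idx = pg_start; fault_idx <= pg_end; fault_idx++) {
                  /* Open-coded clamp from the patch. */
                  long lo = fault_idx - radius < pg_start ? pg_start : fault_idx - radius;
                  long hi = fault_idx + radius > pg_end ? pg_end : fault_idx + radius;

                  /* The faulting neighbourhood stays inside the folio. */
                  assert(pg_start <= lo && lo <= hi && hi <= pg_end);

                  /*
                   * The left/right regions may be empty (e.g. nothing
                   * to the left for fault_idx == 0); their signed
                   * lengths are then 0, never negative, so the
                   * "nr_pages > 0" test skips them.
                   */
                  long left_pages = (lo - 1) - pg_start + 1;      /* r[1] */
                  long right_pages = pg_end - (hi + 1) + 1;       /* r[0] */
                  assert(left_pages >= 0 && right_pages >= 0);

                  /* The three regions partition the folio exactly. */
                  assert(left_pages + (hi - lo + 1) + right_pages ==
                         pg_end - pg_start + 1);
          }
          return 0;
  }

This only checks the arithmetic; the kernel side additionally relies on
range_len() of an empty struct range evaluating to 0.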
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202601240453.QCjgGdJa-lkp@intel.com/
Fixes: 93552c9a3350 ("mm: folio_zero_user: cache neighbouring pages")
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Acked-by: David Hildenbrand (Arm) <david@kernel.org>
---
Hi Andrew,
This version removes an unnecessary cast introduced in v2, which gets
rid of this hunk:
 		if (nr_pages > 0)
-			clear_contig_highpages(page, addr, nr_pages);
+			clear_contig_highpages(page, addr, (unsigned int)nr_pages);
Could you queue this one instead?
Thanks
Ankur
mm/memory.c | 15 +++++++--------
1 file changed, 7 insertions(+), 8 deletions(-)
diff --git a/mm/memory.c b/mm/memory.c
index ce933ee4a3dd..b15f11a0bfa8 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -7284,7 +7284,7 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 	const unsigned long base_addr = ALIGN_DOWN(addr_hint, folio_size(folio));
 	const long fault_idx = (addr_hint - base_addr) / PAGE_SIZE;
 	const struct range pg = DEFINE_RANGE(0, folio_nr_pages(folio) - 1);
-	const int radius = FOLIO_ZERO_LOCALITY_RADIUS;
+	const long radius = FOLIO_ZERO_LOCALITY_RADIUS;
 	struct range r[3];
 	int i;
 
@@ -7292,20 +7292,19 @@ void folio_zero_user(struct folio *folio, unsigned long addr_hint)
 	/*
 	 * Faulting page and its immediate neighbourhood. Will be cleared at the
 	 * end to keep its cachelines hot.
 	 */
-	r[2] = DEFINE_RANGE(clamp_t(s64, fault_idx - radius, pg.start, pg.end),
-			    clamp_t(s64, fault_idx + radius, pg.start, pg.end));
+	r[2] = DEFINE_RANGE(fault_idx - radius < (long)pg.start ? pg.start : fault_idx - radius,
+			    fault_idx + radius > (long)pg.end ? pg.end : fault_idx + radius);
+
 	/* Region to the left of the fault */
-	r[1] = DEFINE_RANGE(pg.start,
-			    clamp_t(s64, r[2].start - 1, pg.start - 1, r[2].start));
+	r[1] = DEFINE_RANGE(pg.start, r[2].start - 1);
 
 	/* Region to the right of the fault: always valid for the common fault_idx=0 case. */
-	r[0] = DEFINE_RANGE(clamp_t(s64, r[2].end + 1, r[2].end, pg.end + 1),
-			    pg.end);
+	r[0] = DEFINE_RANGE(r[2].end + 1, pg.end);
 
 	for (i = 0; i < ARRAY_SIZE(r); i++) {
 		const unsigned long addr = base_addr + r[i].start * PAGE_SIZE;
-		const unsigned int nr_pages = range_len(&r[i]);
+		const long nr_pages = (long)range_len(&r[i]);
 		struct page *page = folio_page(folio, r[i].start);
 
 		if (nr_pages > 0)
--
2.31.1