Replace hugepage boundary computation with ALIGN() helper instead of
an open-coded expression. This improves code readability.
This was flagged by Coccinelle (misc/minmax.cocci) as an opportunity
to use min(), after which the boundary computation was updated following
review suggestions.
Found by: make coccicheck MODE=report M=mm/
No functional change intended.
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
Suggested-by: Matthew Wilcox <willy@infradead.org>
Suggested-by: David Laight <david.laight.linux@gmail.com>
Signed-off-by: Sahil Chandna <chandna.sahil@gmail.com>
---
mm/pagewalk.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 9f91cf85a5be..9fd59d517f37 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -312,8 +312,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
unsigned long end)
{
- unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
- return boundary < end ? boundary : end;
+ return min(ALIGN(addr, huge_page_size(h)), end);
}
static int walk_hugetlb_range(unsigned long addr, unsigned long end,
--
2.50.1
Hi Andrew,
On 2025/11/28 15:01, Sahil Chandna wrote:
> Replace hugepage boundary computation with ALIGN() helper instead of
> an open coded expression. This helps to improves code readability.
>
> This was flagged by Coccinelle (misc/minmax.cocci) as an opportunity
> to use min(), after which the boundary computation was updated following
> review suggestions.
>
> Found by: make coccicheck MODE=report M=mm/
> No functional change intended.
>
> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
> Suggested-by: Matthew Wilcox <willy@infradead.org>
> Suggested-by: David Laight <david.laight.linux@gmail.com>
> Signed-off-by: Sahil Chandna <chandna.sahil@gmail.com>
> ---
> mm/pagewalk.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> index 9f91cf85a5be..9fd59d517f37 100644
> --- a/mm/pagewalk.c
> +++ b/mm/pagewalk.c
> @@ -312,8 +312,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
> static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
> unsigned long end)
> {
> - unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
> - return boundary < end ? boundary : end;
> + return min(ALIGN(addr, huge_page_size(h)), end);
> }
Please drop this patch from the mm-new branch, as it causes
'run_vmtests.sh' to hang. Specifically, the system hangs while
executing the hugepage-vmemmap test, because the program falls into an
infinite loop in walk_hugetlb_range() and cannot break out.
This patch does introduce functional changes and makes an incorrect
assumption that the 'end' must be aligned to the hugepage size. However,
this is not necessarily the case. For example, see how pagemap_read()
calculates the 'end':
"
end = start_vaddr + ((count / PM_ENTRY_BYTES) << PAGE_SHIFT);
"
After reverting this patch, the mm selftests work well.
From: Lance Yang <lance.yang@linux.dev>
On Wed, 24 Dec 2025 15:50:34 +0800, Baolin Wang wrote:
> Hi Andrew,
>
> On 2025/11/28 15:01, Sahil Chandna wrote:
> > Replace hugepage boundary computation with ALIGN() helper instead of
> > an open coded expression. This helps to improves code readability.
> >
> > This was flagged by Coccinelle (misc/minmax.cocci) as an opportunity
> > to use min(), after which the boundary computation was updated following
> > review suggestions.
> >
> > Found by: make coccicheck MODE=report M=mm/
> > No functional change intended.
> >
> > Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> > Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
> > Suggested-by: Matthew Wilcox <willy@infradead.org>
> > Suggested-by: David Laight <david.laight.linux@gmail.com>
> > Signed-off-by: Sahil Chandna <chandna.sahil@gmail.com>
> > ---
> > mm/pagewalk.c | 3 +--
> > 1 file changed, 1 insertion(+), 2 deletions(-)
> >
> > diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> > index 9f91cf85a5be..9fd59d517f37 100644
> > --- a/mm/pagewalk.c
> > +++ b/mm/pagewalk.c
> > @@ -312,8 +312,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
> > static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
> > unsigned long end)
> > {
> > - unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
> > - return boundary < end ? boundary : end;
> > + return min(ALIGN(addr, huge_page_size(h)), end);
> > }
>
> Please drop this patch from the mm-new branch, as it causes
> 'run_vmtests.sh' to hang. Specifically, it leads to the system hanging
> when executing hugepage-vmemmap test, because the program falls into an
> infinite loop in walk_hugetlb_range() and cannot break out.
Good catch! The problem is that ALIGN() returns addr itself when already
aligned, causing the infinite loop ...
>
> This patch does introduce functional changes and makes an incorrect
> assumption that the 'end' must be aligned to the hugepage size. However,
Yep. This patch is not equivalent to the original code when addr is
already aligned :)
> this is not necessarily the case. For example, see how pagemap_read()
> calculates the 'end':
>
> "
> end = start_vaddr + ((count / PM_ENTRY_BYTES) << PAGE_SHIFT);
> "
>
> Revert this patch, mm selftests work well.
On Wed, 24 Dec 2025 17:23:32 +0800
Lance Yang <ioworker0@gmail.com> wrote:
> From: Lance Yang <lance.yang@linux.dev>
>
>
> On Wed, 24 Dec 2025 15:50:34 +0800, Baolin Wang wrote:
> > Hi Andrew,
> >
> > On 2025/11/28 15:01, Sahil Chandna wrote:
> > > Replace hugepage boundary computation with ALIGN() helper instead of
> > > an open coded expression. This helps to improves code readability.
> > >
> > > This was flagged by Coccinelle (misc/minmax.cocci) as an opportunity
> > > to use min(), after which the boundary computation was updated following
> > > review suggestions.
> > >
> > > Found by: make coccicheck MODE=report M=mm/
> > > No functional change intended.
> > >
> > > Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> > > Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
> > > Suggested-by: Matthew Wilcox <willy@infradead.org>
> > > Suggested-by: David Laight <david.laight.linux@gmail.com>
> > > Signed-off-by: Sahil Chandna <chandna.sahil@gmail.com>
> > > ---
> > > mm/pagewalk.c | 3 +--
> > > 1 file changed, 1 insertion(+), 2 deletions(-)
> > >
> > > diff --git a/mm/pagewalk.c b/mm/pagewalk.c
> > > index 9f91cf85a5be..9fd59d517f37 100644
> > > --- a/mm/pagewalk.c
> > > +++ b/mm/pagewalk.c
> > > @@ -312,8 +312,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
> > > static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
> > > unsigned long end)
> > > {
> > > - unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
> > > - return boundary < end ? boundary : end;
> > > + return min(ALIGN(addr, huge_page_size(h)), end);
> > > }
> >
> > Please drop this patch from the mm-new branch, as it causes
> > 'run_vmtests.sh' to hang. Specifically, it leads to the system hanging
> > when executing hugepage-vmemmap test, because the program falls into an
> > infinite loop in walk_hugetlb_range() and cannot break out.
>
> Good catch! The problem is that ALIGN() returns addr itself when already
> aligned, causing the infinite loop ...
Using ALIGN(addr + 1, huge_page_size(h)) would work.
Although it could be (addr + 1) & ~huge_page_mask(h) which is probably
the easiest to understand.
Some of the 'helper' macros don't really make the code easier to read.
(And that includes a lot of uses of min().)
David
>
> >
> > This patch does introduce functional changes and makes an incorrect
> > assumption that the 'end' must be aligned to the hugepage size. However,
>
> Yep. This patch is not equivalent to the original code when addr is
> already aligned :)
>
> > this is not necessarily the case. For example, see how pagemap_read()
> > calculates the 'end':
> >
> > "
> > end = start_vaddr + ((count / PM_ENTRY_BYTES) << PAGE_SHIFT);
> > "
> >
> > Revert this patch, mm selftests work well.
On Wed, Dec 24, 2025 at 02:08:29PM +0000, David Laight wrote:
> > > > +++ b/mm/pagewalk.c
> > > > @@ -312,8 +312,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
> > > > static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
> > > > unsigned long end)
> > > > {
> > > > - unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
> > > > - return boundary < end ? boundary : end;
> > > > + return min(ALIGN(addr, huge_page_size(h)), end);
> > > > }
> > >
> > > Please drop this patch from the mm-new branch, as it causes
> > > 'run_vmtests.sh' to hang. Specifically, it leads to the system hanging
> > > when executing hugepage-vmemmap test, because the program falls into an
> > > infinite loop in walk_hugetlb_range() and cannot break out.
> >
> > Good catch! The problem is that ALIGN() returns addr itself when already
> > aligned, causing the infinite loop ...
>
> Using ALIGN(addr + 1, huge_page_size(h)) would work.
> Although it could be (addr + 1) & ~huge_page_mask(h) which is probably
> the easiest to understand.
> Some of the 'helper' macros don't really make the code easier to read.
> (And that includes a lot of uses of min().)
Or we could go back to my original suggestion.
https://lore.kernel.org/linux-mm/aRyOWrARRlUCeEz6@casper.infradead.org/
which was in v2:
https://lore.kernel.org/linux-mm/f802959f58865371ba1b10081bced98e3784c5e4.1763796152.git.chandna.sahil@gmail.com/
On 12/24/25 19:06, Matthew Wilcox wrote:
> On Wed, Dec 24, 2025 at 02:08:29PM +0000, David Laight wrote:
>>>>> +++ b/mm/pagewalk.c
>>>>> @@ -312,8 +312,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
>>>>> static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
>>>>> unsigned long end)
>>>>> {
>>>>> - unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
>>>>> - return boundary < end ? boundary : end;
>>>>> + return min(ALIGN(addr, huge_page_size(h)), end);
>>>>> }
>>>>
>>>> Please drop this patch from the mm-new branch, as it causes
>>>> 'run_vmtests.sh' to hang. Specifically, it leads to the system hanging
>>>> when executing hugepage-vmemmap test, because the program falls into an
>>>> infinite loop in walk_hugetlb_range() and cannot break out.
>>>
>>> Good catch! The problem is that ALIGN() returns addr itself when already
>>> aligned, causing the infinite loop ...
>>
>> Using ALIGN(addr + 1, huge_page_size(h)) would work.
>> Although it could be (addr + 1) & ~huge_page_mask(h) which is probably
>> the easiest to understand.
>> Some of the 'helper' macros don't really make the code easier to read.
>> (And that includes a lot of uses of min().)
>
> Or we could go back to my original suggestion.
>
> https://lore.kernel.org/linux-mm/aRyOWrARRlUCeEz6@casper.infradead.org/
>
> which was in v2:
>
> https://lore.kernel.org/linux-mm/f802959f58865371ba1b10081bced98e3784c5e4.1763796152.git.chandna.sahil@gmail.com/
I'm starting to wonder whether we should just leave that code alone :)
--
Cheers
David
On Thu, 25 Dec 2025 10:32:46 +0100
"David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
> On 12/24/25 19:06, Matthew Wilcox wrote:
> > On Wed, Dec 24, 2025 at 02:08:29PM +0000, David Laight wrote:
> >>>>> +++ b/mm/pagewalk.c
> >>>>> @@ -312,8 +312,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
> >>>>> static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
> >>>>> unsigned long end)
> >>>>> {
> >>>>> - unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
> >>>>> - return boundary < end ? boundary : end;
> >>>>> + return min(ALIGN(addr, huge_page_size(h)), end);
> >>>>> }
> >>>>
> >>>> Please drop this patch from the mm-new branch, as it causes
> >>>> 'run_vmtests.sh' to hang. Specifically, it leads to the system hanging
> >>>> when executing hugepage-vmemmap test, because the program falls into an
> >>>> infinite loop in walk_hugetlb_range() and cannot break out.
> >>>
> >>> Good catch! The problem is that ALIGN() returns addr itself when already
> >>> aligned, causing the infinite loop ...
> >>
> >> Using ALIGN(addr + 1, huge_page_size(h)) would work.
> >> Although it could be (addr + 1) & ~huge_page_mask(h) which is probably
> >> the easiest to understand.
> >> Some of the 'helper' macros don't really make the code easier to read.
> >> (And that includes a lot of uses of min().)
> >
> > Or we could go back to my original suggestion.
> >
> > https://lore.kernel.org/linux-mm/aRyOWrARRlUCeEz6@casper.infradead.org/
> >
> > which was in v2:
> >
> > https://lore.kernel.org/linux-mm/f802959f58865371ba1b10081bced98e3784c5e4.1763796152.git.chandna.sahil@gmail.com/
>
> I'm starting to wonder whether we should just leave that code alone :)
>
Maybe 'we' should stop checkpatch (etc) suggesting min() in trivial
cases. It doesn't really make the code better.
David
On Thu, Dec 25, 2025 at 10:01:06AM +0000, David Laight wrote:
>On Thu, 25 Dec 2025 10:32:46 +0100
>"David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
>
>> On 12/24/25 19:06, Matthew Wilcox wrote:
>> > On Wed, Dec 24, 2025 at 02:08:29PM +0000, David Laight wrote:
>> >>>>> +++ b/mm/pagewalk.c
>> >>>>> @@ -312,8 +312,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
>> >>>>> static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
>> >>>>> unsigned long end)
>> >>>>> {
>> >>>>> - unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
>> >>>>> - return boundary < end ? boundary : end;
>> >>>>> + return min(ALIGN(addr, huge_page_size(h)), end);
>> >>>>> }
>> >>>>
>> >>>> Please drop this patch from the mm-new branch, as it causes
>> >>>> 'run_vmtests.sh' to hang. Specifically, it leads to the system hanging
>> >>>> when executing hugepage-vmemmap test, because the program falls into an
>> >>>> infinite loop in walk_hugetlb_range() and cannot break out.
>> >>>
>> >>> Good catch! The problem is that ALIGN() returns addr itself when already
>> >>> aligned, causing the infinite loop ...
>> >>
>> >> Using ALIGN(addr + 1, huge_page_size(h)) would work.
>> >> Although it could be (addr + 1) & ~huge_page_mask(h) which is probably
>> >> the easiest to understand.
>> >> Some of the 'helper' macros don't really make the code easier to read.
>> >> (And that includes a lot of uses of min().)
>> >
>> > Or we could go back to my original suggestion.
>> >
>> > https://lore.kernel.org/linux-mm/aRyOWrARRlUCeEz6@casper.infradead.org/
>> >
>> > which was in v2:
>> >
>> > https://lore.kernel.org/linux-mm/f802959f58865371ba1b10081bced98e3784c5e4.1763796152.git.chandna.sahil@gmail.com/
>>
>> I'm starting to wonder whether we should just leave that code alone :)
>>
>
>Maybe 'we' should stop checkpatch (etc) suggesting min() in trivial
>cases. It doesn't really make the code better.
>
> David
Thank you for the feedback. I dropped this patch and it has been resubmitted [1].
The next two patches in this series were also dropped due to the error in this
patch. Requesting feedback on whether I can re-submit the other two patches,
which use the "%pe" printk format specifier.
Sharing references to patches [2] and [3] below.
Thanks,
Sahil
[1] https://lore.kernel.org/all/39f4490a-d713-44a8-a1d7-3568b01b3dc2@kernel.org/
[2] https://lore.kernel.org/all/2c842a64fddeb0fe0cac087783aaedd97edc3191.1764312627.git.chandna.sahil@gmail.com/
[3] https://lore.kernel.org/all/6d729a60eb71baade3670e5bb609a068683af3eb.1764312627.git.chandna.sahil@gmail.com/
On Wed, Dec 24, 2025 at 05:23:32PM +0800, Lance Yang wrote:
>From: Lance Yang <lance.yang@linux.dev>
>
>
>On Wed, 24 Dec 2025 15:50:34 +0800, Baolin Wang wrote:
>> Hi Andrew,
>>
>> On 2025/11/28 15:01, Sahil Chandna wrote:
>> > Replace hugepage boundary computation with ALIGN() helper instead of
>> > an open coded expression. This helps to improves code readability.
>> >
>> > This was flagged by Coccinelle (misc/minmax.cocci) as an opportunity
>> > to use min(), after which the boundary computation was updated following
>> > review suggestions.
>> >
>> > Found by: make coccicheck MODE=report M=mm/
>> > No functional change intended.
>> >
>> > Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
>> > Suggested-by: David Hildenbrand (Red Hat) <david@kernel.org>
>> > Suggested-by: Matthew Wilcox <willy@infradead.org>
>> > Suggested-by: David Laight <david.laight.linux@gmail.com>
>> > Signed-off-by: Sahil Chandna <chandna.sahil@gmail.com>
>> > ---
>> > mm/pagewalk.c | 3 +--
>> > 1 file changed, 1 insertion(+), 2 deletions(-)
>> >
>> > diff --git a/mm/pagewalk.c b/mm/pagewalk.c
>> > index 9f91cf85a5be..9fd59d517f37 100644
>> > --- a/mm/pagewalk.c
>> > +++ b/mm/pagewalk.c
>> > @@ -312,8 +312,7 @@ static int walk_pgd_range(unsigned long addr, unsigned long end,
>> > static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr,
>> > unsigned long end)
>> > {
>> > - unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h);
>> > - return boundary < end ? boundary : end;
>> > + return min(ALIGN(addr, huge_page_size(h)), end);
>> > }
>>
>> Please drop this patch from the mm-new branch, as it causes
>> 'run_vmtests.sh' to hang. Specifically, it leads to the system hanging
>> when executing hugepage-vmemmap test, because the program falls into an
>> infinite loop in walk_hugetlb_range() and cannot break out.
>
>Good catch! The problem is that ALIGN() returns addr itself when already
>aligned, causing the infinite loop ...
>
>>
>> This patch does introduce functional changes and makes an incorrect
>> assumption that the 'end' must be aligned to the hugepage size. However,
>
>Yep. This patch is not equivalent to the original code when addr is
>already aligned :)
>
>> this is not necessarily the case. For example, see how pagemap_read()
>> calculates the 'end':
>>
>> "
>> end = start_vaddr + ((count / PM_ENTRY_BYTES) << PAGE_SHIFT);
>> "
>>
>> Revert this patch, mm selftests work well.
Hi Baolin, Lance, Andrew,
Thanks for catching this; I understand why ALIGN() caused the infinite loop.
Please drop this patch, or shall I submit a revert?
Apologies for this. I am setting up an environment for running the selftests
and will send out a corrected patch once it successfully passes the mm selftests.
Regards,
Sahil