if (map_chg) is always true, since it is nested in another if statement
which checks for it already. Remove the check and un-indent for readability.
if (unlikely(map_chg == MAP_CHG_NEEDED && retval == 0)) {
...
if (map_chg) {
...
}
}
No functional change intended.
Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
---
mm/hugetlb.c | 12 +++++-------
1 file changed, 5 insertions(+), 7 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 88b9e997c9da..432a5054ca1d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3019,13 +3019,11 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
rsv_adjust = hugepage_subpool_put_pages(spool, 1);
hugetlb_acct_memory(h, -rsv_adjust);
- if (map_chg) {
- spin_lock_irq(&hugetlb_lock);
- hugetlb_cgroup_uncharge_folio_rsvd(
- hstate_index(h), pages_per_huge_page(h),
- folio);
- spin_unlock_irq(&hugetlb_lock);
- }
+ spin_lock_irq(&hugetlb_lock);
+ hugetlb_cgroup_uncharge_folio_rsvd(
+ hstate_index(h), pages_per_huge_page(h),
+ folio);
+ spin_unlock_irq(&hugetlb_lock);
}
}
--
2.47.3
On Thu, 15 Jan 2026 13:14:36 -0500 Joshua Hahn <joshua.hahnjy@gmail.com> wrote:
> if (map_chg) is always true, since it is nested in another if statement
> which checks for it already. Remove the check and un-indent for readability.
>
> if (unlikely(map_chg == MAP_CHG_NEEDED && retval == 0)) {
> ...
>
> if (map_chg) {
> ...
> }
> }
>
> No functional change intended.
>
> Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
Reviewed-by: SeongJae Park <sj@kernel.org>
> ---
> mm/hugetlb.c | 12 +++++-------
> 1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 88b9e997c9da..432a5054ca1d 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3019,13 +3019,11 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>
> rsv_adjust = hugepage_subpool_put_pages(spool, 1);
> hugetlb_acct_memory(h, -rsv_adjust);
> - if (map_chg) {
> - spin_lock_irq(&hugetlb_lock);
> - hugetlb_cgroup_uncharge_folio_rsvd(
> - hstate_index(h), pages_per_huge_page(h),
> - folio);
> - spin_unlock_irq(&hugetlb_lock);
> - }
> + spin_lock_irq(&hugetlb_lock);
> + hugetlb_cgroup_uncharge_folio_rsvd(
> + hstate_index(h), pages_per_huge_page(h),
> + folio);
Nit. Good chance to reduce one more line by putting 'folio' on the upper line?
> + spin_unlock_irq(&hugetlb_lock);
> }
> }
>
> --
> 2.47.3
Thanks,
SJ
On Thu, 15 Jan 2026 17:39:39 -0800 SeongJae Park <sj@kernel.org> wrote:
> On Thu, 15 Jan 2026 13:14:36 -0500 Joshua Hahn <joshua.hahnjy@gmail.com> wrote:
>
> > if (map_chg) is always true, since it is nested in another if statement
> > which checks for it already. Remove the check and un-indent for readability.
> >
> > if (unlikely(map_chg == MAP_CHG_NEEDED && retval == 0)) {
> > ...
> >
> > if (map_chg) {
> > ...
> > }
> > }
> >
> > No functional change intended.
> >
> > Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
>
> Reviewed-by: SeongJae Park <sj@kernel.org>
Thank you, SJ!
> > ---
> > mm/hugetlb.c | 12 +++++-------
> > 1 file changed, 5 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 88b9e997c9da..432a5054ca1d 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -3019,13 +3019,11 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> >
> > rsv_adjust = hugepage_subpool_put_pages(spool, 1);
> > hugetlb_acct_memory(h, -rsv_adjust);
> > - if (map_chg) {
> > - spin_lock_irq(&hugetlb_lock);
> > - hugetlb_cgroup_uncharge_folio_rsvd(
> > - hstate_index(h), pages_per_huge_page(h),
> > - folio);
> > - spin_unlock_irq(&hugetlb_lock);
> > - }
> > + spin_lock_irq(&hugetlb_lock);
> > + hugetlb_cgroup_uncharge_folio_rsvd(
> > + hstate_index(h), pages_per_huge_page(h),
> > + folio);
>
> Nit. Good chance to reduce one more line by putting 'folio' on the upper line?
Ack, will include in the next version of this :-) I hope you have a great day!
Joshua
On 1/15/26 19:14, Joshua Hahn wrote:
> if (map_chg) is always true, since it is nested in another if statement
> which checks for it already. Remove the check and un-indent for readability.
>
> if (unlikely(map_chg == MAP_CHG_NEEDED && retval == 0)) {
> ...
>
> if (map_chg) {
> ...
> }
> }
>
> No functional change intended.
>
> Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
> ---
> mm/hugetlb.c | 12 +++++-------
> 1 file changed, 5 insertions(+), 7 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 88b9e997c9da..432a5054ca1d 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3019,13 +3019,11 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
>
> rsv_adjust = hugepage_subpool_put_pages(spool, 1);
> hugetlb_acct_memory(h, -rsv_adjust);
> - if (map_chg) {
> - spin_lock_irq(&hugetlb_lock);
> - hugetlb_cgroup_uncharge_folio_rsvd(
> - hstate_index(h), pages_per_huge_page(h),
> - folio);
> - spin_unlock_irq(&hugetlb_lock);
> - }
> + spin_lock_irq(&hugetlb_lock);
> + hugetlb_cgroup_uncharge_folio_rsvd(
> + hstate_index(h), pages_per_huge_page(h),
> + folio);
> + spin_unlock_irq(&hugetlb_lock);
> }
> }
>
MAP_CHG_NEEDED = 1
Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
--
Cheers
David
On Thu, 15 Jan 2026 21:14:15 +0100 "David Hildenbrand (Red Hat)" <david@kernel.org> wrote:
> On 1/15/26 19:14, Joshua Hahn wrote:
> > if (map_chg) is always true, since it is nested in another if statement
> > which checks for it already. Remove the check and un-indent for readability.
> >
> > if (unlikely(map_chg == MAP_CHG_NEEDED && retval == 0)) {
> > ...
> >
> > if (map_chg) {
> > ...
> > }
> > }
> >
> > No functional change intended.
> >
> > Signed-off-by: Joshua Hahn <joshua.hahnjy@gmail.com>
> > ---
> > mm/hugetlb.c | 12 +++++-------
> > 1 file changed, 5 insertions(+), 7 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 88b9e997c9da..432a5054ca1d 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -3019,13 +3019,11 @@ struct folio *alloc_hugetlb_folio(struct vm_area_struct *vma,
> >
> > rsv_adjust = hugepage_subpool_put_pages(spool, 1);
> > hugetlb_acct_memory(h, -rsv_adjust);
> > - if (map_chg) {
> > - spin_lock_irq(&hugetlb_lock);
> > - hugetlb_cgroup_uncharge_folio_rsvd(
> > - hstate_index(h), pages_per_huge_page(h),
> > - folio);
> > - spin_unlock_irq(&hugetlb_lock);
> > - }
> > + spin_lock_irq(&hugetlb_lock);
> > + hugetlb_cgroup_uncharge_folio_rsvd(
> > + hstate_index(h), pages_per_huge_page(h),
> > + folio);
> > + spin_unlock_irq(&hugetlb_lock);
> > }
> > }
> >
>
> MAP_CHG_NEEDED = 1
>
> Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
Thank you David, I agree it's helpful to include MAP_CHG_NEEDED != 0.
Will add in the v2! Have a great day :-)
Joshua
On Thu, 15 Jan 2026 16:10:51 -0500 Joshua Hahn <joshua.hahnjy@gmail.com> wrote:

> > > - spin_unlock_irq(&hugetlb_lock);
> > > - }
> > > + spin_lock_irq(&hugetlb_lock);
> > > + hugetlb_cgroup_uncharge_folio_rsvd(
> > > + hstate_index(h), pages_per_huge_page(h),
> > > + folio);
> > > + spin_unlock_irq(&hugetlb_lock);
> > > }
> > > }
> > >
> >
> > MAP_CHG_NEEDED = 1
> >
> > Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
>
> Thank you David, I agree it's helpful to include MAP_CHG_NEEDED != 0.

Oh.

> Will add in the v2! Have a great day :-)

While in there, this:

> > > + hugetlb_cgroup_uncharge_folio_rsvd(
> > > + hstate_index(h), pages_per_huge_page(h),
> > > + folio);

contains now-less-needed 80-column party tricks.
On Thu, 15 Jan 2026 13:59:42 -0800 Andrew Morton <akpm@linux-foundation.org> wrote:

> On Thu, 15 Jan 2026 16:10:51 -0500 Joshua Hahn <joshua.hahnjy@gmail.com> wrote:
>
> > > > - spin_unlock_irq(&hugetlb_lock);
> > > > - }
> > > > + spin_lock_irq(&hugetlb_lock);
> > > > + hugetlb_cgroup_uncharge_folio_rsvd(
> > > > + hstate_index(h), pages_per_huge_page(h),
> > > > + folio);
> > > > + spin_unlock_irq(&hugetlb_lock);
> > > > }
> > > > }
> > > >
> > >
> > > MAP_CHG_NEEDED = 1
> > >
> > > Acked-by: David Hildenbrand (Red Hat) <david@kernel.org>
> >
> > Thank you David, I agree it's helpful to include MAP_CHG_NEEDED != 0.
>
> Oh.
>
> > Will add in the v2! Have a great day :-)
>
> While in there, this:
>
> > > > + hugetlb_cgroup_uncharge_folio_rsvd(
> > > > + hstate_index(h), pages_per_huge_page(h),
> > > > + folio);
>
> contains now-less-needed 80-column party tricks.

Ah, that's a nice catch as well. I'll be sure to include it as well!

My bigger motivation for sending a v2 was mostly to separate out the fix from
the cleanup patches. Since they are separate now, I'll send out a v2 for 2/3
and 3/3, while I wait for reviews for 1/3 and send out a v2 for that one at a
later point. For what it's worth, I don't think the fix is super urgent :-)

Thanks again, Andrew!
Joshua