From: Fan Ni <fan.ni@samsung.com>
The function unmap_hugepage_range() has two kinds of users:
1) unmap_ref_private(), which passes in the head page of a folio. Since
   unmap_ref_private() already takes a folio, and the folio is not used
   for anything else in that function, it is natural for
   unmap_hugepage_range() to take a folio as well.
2) All other callers, which pass in a NULL pointer.

In both cases we can pass in a folio. Refactor unmap_hugepage_range() to
take a folio.
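
To illustrate why both kinds of callers can hand a folio straight
through, here is a minimal user-space sketch (not part of the patch; the
struct definitions are simplified assumptions that mirror the kernel's
layout, where struct page is the first member of struct folio, so a
folio and its head page share an address):

#include <assert.h>
#include <stdio.h>

/*
 * Simplified stand-ins for the kernel structures (assumption: as in the
 * kernel, struct page sits at offset 0 of struct folio).
 */
struct page {
	unsigned long flags;
};

struct folio {
	struct page page;	/* head page lives at offset 0 */
};

/*
 * Toy model of the refactored entry point: a NULL folio means "unmap the
 * whole range"; a non-NULL folio restricts the work to that folio, which
 * is still handed down internally as its head page.
 */
static void unmap_hugepage_range_sketch(struct folio *folio)
{
	struct page *ref_page = folio ? &folio->page : NULL;

	printf("ref_page = %p (%s)\n", (void *)ref_page,
	       ref_page ? "folio's head page" : "unmap everything");
}

int main(void)
{
	struct folio f = { .page = { .flags = 0 } };

	/* page really does sit at offset 0, so the pointers coincide. */
	assert((void *)&f == (void *)&f.page);

	unmap_hugepage_range_sketch(&f);	/* unmap_ref_private()-style caller */
	unmap_hugepage_range_sketch(NULL);	/* every other caller */
	return 0;
}

The sketch checks for NULL before taking the head page; the patch itself
passes &folio->page straight down, which in practice yields NULL for a
NULL folio because the head page sits at offset zero of the folio.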
Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
 include/linux/hugetlb.h | 4 ++--
 mm/hugetlb.c            | 7 ++++---
 2 files changed, 6 insertions(+), 5 deletions(-)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a57bed83c657..83d85cbb4284 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -128,8 +128,8 @@ int move_hugetlb_page_tables(struct vm_area_struct *vma,
int copy_hugetlb_page_range(struct mm_struct *, struct mm_struct *,
struct vm_area_struct *, struct vm_area_struct *);
void unmap_hugepage_range(struct vm_area_struct *,
- unsigned long, unsigned long, struct page *,
- zap_flags_t);
+ unsigned long start, unsigned long end,
+ struct folio *, zap_flags_t);
void __unmap_hugepage_range(struct mmu_gather *tlb,
struct vm_area_struct *vma,
unsigned long start, unsigned long end,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b1268e7ca1f6..7601e3d344bc 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6014,7 +6014,7 @@ void __hugetlb_zap_end(struct vm_area_struct *vma,
}
void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
- unsigned long end, struct page *ref_page,
+ unsigned long end, struct folio *folio,
zap_flags_t zap_flags)
{
struct mmu_notifier_range range;
@@ -6026,7 +6026,8 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
mmu_notifier_invalidate_range_start(&range);
tlb_gather_mmu(&tlb, vma->vm_mm);
- __unmap_hugepage_range(&tlb, vma, start, end, ref_page, zap_flags);
+ __unmap_hugepage_range(&tlb, vma, start, end,
+ &folio->page, zap_flags);
mmu_notifier_invalidate_range_end(&range);
tlb_finish_mmu(&tlb);
@@ -6084,7 +6085,7 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
unmap_hugepage_range(iter_vma, address,
address + huge_page_size(h),
- &folio->page, 0);
+ folio, 0);
}
i_mmap_unlock_write(mapping);
}
--
2.47.2
On Mon, Apr 28, 2025 at 10:11:45AM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> [...]

Reviewed-by: Oscar Salvador <osalvador@suse.de>

-- 
Oscar Salvador
SUSE Labs
On 28.04.25 19:11, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> [...]

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb