From: Fan Ni <fan.ni@samsung.com>

The function unmap_ref_private() has only a single user, which passes in
&folio->page. Let it take the folio directly.

Signed-off-by: Fan Ni <fan.ni@samsung.com>
Reviewed-by: Muchun Song <muchun.song@linux.dev>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
---
mm/hugetlb.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
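
A brief note on why this works, for readers newer to the folio API: a
struct folio carries its head struct page inside it, which is why the
lone caller was able to pass &folio->page, and why the conversion can
still hand &folio->page down to unmap_hugepage_range(), which has not
been converted yet. The fragment below is a simplified, standalone
sketch of that relationship; the type layouts and the helper names
legacy_unmap() and unmap_ref_private_sketch() are illustrative
stand-ins, not the kernel's real definitions.

  /* Simplified sketch -- not the kernel's actual definitions. */
  #include <stdio.h>

  struct page  { unsigned long flags; };
  struct folio { struct page page; };   /* head page embedded first */

  /* stand-in for an older helper that still wants a struct page * */
  static void legacy_unmap(struct page *page)
  {
          printf("unmapping page at %p\n", (void *)page);
  }

  /* new-style interface: callers hand over the folio itself ... */
  static void unmap_ref_private_sketch(struct folio *folio)
  {
          /* ... and the page pointer is derived only where the
           * older page-based helper still needs it */
          legacy_unmap(&folio->page);
  }

  int main(void)
  {
          struct folio f = { { 0 } };
          unmap_ref_private_sketch(&f);
          return 0;
  }
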
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e287d8050b40..b1268e7ca1f6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6039,7 +6039,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
* same region.
*/
static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
- struct page *page, unsigned long address)
+ struct folio *folio, unsigned long address)
{
struct hstate *h = hstate_vma(vma);
struct vm_area_struct *iter_vma;
@@ -6083,7 +6083,8 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
*/
if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
unmap_hugepage_range(iter_vma, address,
- address + huge_page_size(h), page, 0);
+ address + huge_page_size(h),
+ &folio->page, 0);
}
i_mmap_unlock_write(mapping);
}
@@ -6206,8 +6207,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
hugetlb_vma_unlock_read(vma);
mutex_unlock(&hugetlb_fault_mutex_table[hash]);
- unmap_ref_private(mm, vma, &old_folio->page,
- vmf->address);
+ unmap_ref_private(mm, vma, old_folio, vmf->address);
mutex_lock(&hugetlb_fault_mutex_table[hash]);
hugetlb_vma_lock_read(vma);
--
2.47.2

On Mon, Apr 28, 2025 at 10:11:44AM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_ref_private() has only user, which passes in
> &folio->page. Let it take folio directly.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> Reviewed-by: Muchun Song <muchun.song@linux.dev>
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
> ---
> mm/hugetlb.c | 8 ++++----
> 1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index e287d8050b40..b1268e7ca1f6 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -6039,7 +6039,7 @@ void unmap_hugepage_range(struct vm_area_struct *vma, unsigned long start,
> * same region.
> */
> static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> - struct page *page, unsigned long address)
> + struct folio *folio, unsigned long address)
> {
> struct hstate *h = hstate_vma(vma);
> struct vm_area_struct *iter_vma;
> @@ -6083,7 +6083,8 @@ static void unmap_ref_private(struct mm_struct *mm, struct vm_area_struct *vma,
> */
> if (!is_vma_resv_set(iter_vma, HPAGE_RESV_OWNER))
> unmap_hugepage_range(iter_vma, address,
> - address + huge_page_size(h), page, 0);
> + address + huge_page_size(h),
> + &folio->page, 0);
> }
> i_mmap_unlock_write(mapping);
> }
> @@ -6206,8 +6207,7 @@ static vm_fault_t hugetlb_wp(struct folio *pagecache_folio,
> hugetlb_vma_unlock_read(vma);
> mutex_unlock(&hugetlb_fault_mutex_table[hash]);
>
> - unmap_ref_private(mm, vma, &old_folio->page,
> - vmf->address);
> + unmap_ref_private(mm, vma, old_folio, vmf->address);
>
> mutex_lock(&hugetlb_fault_mutex_table[hash]);
> hugetlb_vma_lock_read(vma);
> --
> 2.47.2
>
>
--
Oscar Salvador
SUSE Labs

On Mon, Apr 28, 2025 at 10:11:44AM -0700, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_ref_private() has only user, which passes in
> &folio->page. Let it take folio directly.
>
> Signed-off-by: Fan Ni <fan.ni@samsung.com>
> Reviewed-by: Muchun Song <muchun.song@linux.dev>
> Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>

Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>

On 28.04.25 19:11, nifan.cxl@gmail.com wrote:
> From: Fan Ni <fan.ni@samsung.com>
>
> The function unmap_ref_private() has only user, which passes in

"only a single user"

> &folio->page. Let it take folio directly.

"the folio"

Acked-by: David Hildenbrand <david@redhat.com>

-- 
Cheers,

David / dhildenb