From: Peng Zhang <zhangpeng362@huawei.com>
Subject: [PATCH v3 5/6] mm: convert copy_user_huge_page() to copy_user_folio()
Date: Sat, 25 Mar 2023 14:56:07 +0800
Message-ID: <20230325065608.601391-6-zhangpeng362@huawei.com>
In-Reply-To: <20230325065608.601391-1-zhangpeng362@huawei.com>
References: <20230325065608.601391-1-zhangpeng362@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: ZhangPeng <zhangpeng362@huawei.com>

Replace copy_user_huge_page() with copy_user_folio(). copy_user_folio()
does the same as copy_user_huge_page(), but takes folios instead of
pages. Also convert copy_user_gigantic_page() to take folios.
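
As an illustration of the conversion, here is the hugetlb_wp() call site
before and after (excerpted from the diff below; no code beyond this
patch is involved):

	/* before: raw struct page pointers */
	copy_user_huge_page(&new_folio->page, old_page, address, vma,
			    pages_per_huge_page(h));

	/* after: folios throughout; page_folio() wraps the source page */
	copy_user_folio(new_folio, page_folio(old_page), address, vma,
			pages_per_huge_page(h));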
Signed-off-by: ZhangPeng <zhangpeng362@huawei.com>
---
 include/linux/mm.h |  8 ++++----
 mm/hugetlb.c       | 12 ++++++------
 mm/memory.c        | 28 ++++++++++++++--------------
 3 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 69dfadee23e8..6a787fe66ea1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3542,10 +3542,10 @@ extern const struct attribute_group memory_failure_attr_group;
 extern void clear_huge_page(struct page *page,
 			    unsigned long addr_hint,
 			    unsigned int pages_per_huge_page);
-extern void copy_user_huge_page(struct page *dst, struct page *src,
-				unsigned long addr_hint,
-				struct vm_area_struct *vma,
-				unsigned int pages_per_huge_page);
+void copy_user_folio(struct folio *dst, struct folio *src,
+		     unsigned long addr_hint,
+		     struct vm_area_struct *vma,
+		     unsigned int pages_per_huge_page);
 long copy_folio_from_user(struct folio *dst_folio,
 			  const void __user *usr_src,
 			  bool allow_pagefault);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1cfd20e5fe8b..85657f9007ee 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5093,8 +5093,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				ret = PTR_ERR(new_folio);
 				break;
 			}
-			copy_user_huge_page(&new_folio->page, ptepage, addr, dst_vma,
-					    npages);
+			copy_user_folio(new_folio, page_folio(ptepage), addr, dst_vma,
+					npages);
 			put_page(ptepage);
 
 			/* Install the new hugetlb folio if src pte stable */
@@ -5602,8 +5602,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto out_release_all;
 	}
 
-	copy_user_huge_page(&new_folio->page, old_page, address, vma,
-			    pages_per_huge_page(h));
+	copy_user_folio(new_folio, page_folio(old_page), address, vma,
+			pages_per_huge_page(h));
 	__folio_mark_uptodate(new_folio);
 
 	mmu_notifier_range_init(&range, MMU_NOTIFY_CLEAR, 0, mm, haddr,
@@ -6244,8 +6244,8 @@ int hugetlb_mcopy_atomic_pte(struct mm_struct *dst_mm,
 			*foliop = NULL;
 			goto out;
 		}
-		copy_user_huge_page(&folio->page, &(*foliop)->page, dst_addr, dst_vma,
-				    pages_per_huge_page(h));
+		copy_user_folio(folio, *foliop, dst_addr, dst_vma,
+				pages_per_huge_page(h));
 		folio_put(*foliop);
 		*foliop = NULL;
 	}
diff --git a/mm/memory.c b/mm/memory.c
index faf79742e0b6..4752f0e829b6 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -5716,21 +5716,21 @@ void clear_huge_page(struct page *page,
 	process_huge_page(addr_hint, pages_per_huge_page, clear_subpage, page);
 }
 
-static void copy_user_gigantic_page(struct page *dst, struct page *src,
-				    unsigned long addr,
-				    struct vm_area_struct *vma,
-				    unsigned int pages_per_huge_page)
+static void copy_user_gigantic_page(struct folio *dst, struct folio *src,
+				    unsigned long addr,
+				    struct vm_area_struct *vma,
+				    unsigned int pages_per_huge_page)
 {
 	int i;
-	struct page *dst_base = dst;
-	struct page *src_base = src;
+	struct page *dst_page;
+	struct page *src_page;
 
 	for (i = 0; i < pages_per_huge_page; i++) {
-		dst = nth_page(dst_base, i);
-		src = nth_page(src_base, i);
+		dst_page = folio_page(dst, i);
+		src_page = folio_page(src, i);
 
 		cond_resched();
-		copy_user_highpage(dst, src, addr + i*PAGE_SIZE, vma);
+		copy_user_highpage(dst_page, src_page, addr + i*PAGE_SIZE, vma);
 	}
 }
 
@@ -5748,15 +5748,15 @@ static void copy_subpage(unsigned long addr, int idx, void *arg)
 			   addr, copy_arg->vma);
 }
 
-void copy_user_huge_page(struct page *dst, struct page *src,
-			 unsigned long addr_hint, struct vm_area_struct *vma,
-			 unsigned int pages_per_huge_page)
+void copy_user_folio(struct folio *dst, struct folio *src,
+		     unsigned long addr_hint, struct vm_area_struct *vma,
+		     unsigned int pages_per_huge_page)
 {
 	unsigned long addr = addr_hint &
 		~(((unsigned long)pages_per_huge_page << PAGE_SHIFT) - 1);
 	struct copy_subpage_arg arg = {
-		.dst = dst,
-		.src = src,
+		.dst = &dst->page,
+		.src = &src->page,
 		.vma = vma,
 	};
 
-- 
2.25.1