From: Peng Zhang
Subject: [PATCH 2/2] mm/hugetlb: Use a folio in hugetlb_wp()
Date: Fri, 2 Jun 2023 09:54:08 +0800
Message-ID: <20230602015408.376149-3-zhangpeng362@huawei.com>
In-Reply-To: <20230602015408.376149-1-zhangpeng362@huawei.com>
References: <20230602015408.376149-1-zhangpeng362@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

From: ZhangPeng

We
can replace nine implicit calls to compound_head() with one by using
old_folio. However, we still need to keep old_page because we need to
know which page in the folio we are copying.

Suggested-by: Matthew Wilcox (Oracle)
Signed-off-by: ZhangPeng
Reviewed-by: Sidhartha Kumar
Reviewed-by: Muchun Song
---
 mm/hugetlb.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0b774dd3d57b..f0ab6e8adf6f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5543,6 +5543,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	pte_t pte = huge_ptep_get(ptep);
 	struct hstate *h = hstate_vma(vma);
 	struct page *old_page;
+	struct folio *old_folio;
 	struct folio *new_folio;
 	int outside_reserve = 0;
 	vm_fault_t ret = 0;
@@ -5574,6 +5575,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	}
 
 	old_page = pte_page(pte);
+	old_folio = page_folio(old_page);
 
 	delayacct_wpcopy_start();
 
@@ -5582,7 +5584,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * If no-one else is actually using this page, we're the exclusive
 	 * owner and can reuse this page.
 	 */
-	if (page_mapcount(old_page) == 1 && PageAnon(old_page)) {
+	if (page_mapcount(old_page) == 1 && folio_test_anon(old_folio)) {
 		if (!PageAnonExclusive(old_page))
 			page_move_anon_rmap(old_page, vma);
 		if (likely(!unshare))
@@ -5591,8 +5593,8 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		delayacct_wpcopy_end();
 		return 0;
 	}
-	VM_BUG_ON_PAGE(PageAnon(old_page) && PageAnonExclusive(old_page),
-		       old_page);
+	VM_BUG_ON_PAGE(folio_test_anon(old_folio) &&
+		       PageAnonExclusive(old_page), old_page);
 
 	/*
 	 * If the process that created a MAP_PRIVATE mapping is about to
@@ -5604,10 +5606,10 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * of the full address range.
 	 */
 	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER) &&
-			page_folio(old_page) != pagecache_folio)
+			old_folio != pagecache_folio)
 		outside_reserve = 1;
 
-	get_page(old_page);
+	folio_get(old_folio);
 
 	/*
 	 * Drop page table lock as buddy allocator may be called. It will
@@ -5629,7 +5631,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		pgoff_t idx;
 		u32 hash;
 
-		put_page(old_page);
+		folio_put(old_folio);
 		/*
 		 * Drop hugetlb_fault_mutex and vma_lock before
 		 * unmapping. unmapping needs to hold vma_lock
@@ -5674,7 +5676,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto out_release_all;
 	}
 
-	if (copy_user_large_folio(new_folio, page_folio(old_page), address, vma)) {
+	if (copy_user_large_folio(new_folio, old_folio, address, vma)) {
 		ret = VM_FAULT_HWPOISON_LARGE;
 		goto out_release_all;
 	}
@@ -5703,7 +5705,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		set_huge_pte_at(mm, haddr, ptep, newpte);
 		folio_set_hugetlb_migratable(new_folio);
 		/* Make the old page be freed below */
-		new_folio = page_folio(old_page);
+		new_folio = old_folio;
 	}
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(&range);
@@ -5712,11 +5714,11 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * No restore in case of successful pagetable update (Break COW or
 	 * unshare)
 	 */
-	if (new_folio != page_folio(old_page))
+	if (new_folio != old_folio)
 		restore_reserve_on_error(h, vma, haddr, new_folio);
 	folio_put(new_folio);
 out_release_old:
-	put_page(old_page);
+	folio_put(old_folio);
 
 	spin_lock(ptl);	/* Caller expects lock to be held */
 
-- 
2.25.1