From: Peng Zhang
Subject: [PATCH v2 1/3] mm/hugetlb: Use a folio in copy_hugetlb_page_range()
Date: Tue, 6 Jun 2023 14:20:11 +0800
Message-ID: <20230606062013.2947002-2-zhangpeng362@huawei.com>
In-Reply-To: <20230606062013.2947002-1-zhangpeng362@huawei.com>
References: <20230606062013.2947002-1-zhangpeng362@huawei.com>

From: ZhangPeng

We can replace five implicit calls to compound_head() with one by using
pte_folio. The page we get back is always a head page, so we just
convert ptepage to pte_folio.

Suggested-by: Matthew Wilcox (Oracle)
Signed-off-by: ZhangPeng
Reviewed-by: Muchun Song
Reviewed-by: Sidhartha Kumar
Reviewed-by: Matthew Wilcox (Oracle)
---
 mm/hugetlb.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ea24718db4af..d6f6d19958a5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5016,7 +5016,7 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			  struct vm_area_struct *src_vma)
 {
 	pte_t *src_pte, *dst_pte, entry;
-	struct page *ptepage;
+	struct folio *pte_folio;
 	unsigned long addr;
 	bool cow = is_cow_mapping(src_vma->vm_flags);
 	struct hstate *h = hstate_vma(src_vma);
@@ -5115,8 +5115,8 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			set_huge_pte_at(dst, addr, dst_pte, entry);
 		} else {
 			entry = huge_ptep_get(src_pte);
-			ptepage = pte_page(entry);
-			get_page(ptepage);
+			pte_folio = page_folio(pte_page(entry));
+			folio_get(pte_folio);
 
 			/*
 			 * Failing to duplicate the anon rmap is a rare case
@@ -5128,10 +5128,10 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 			 * need to be without the pgtable locks since we could
 			 * sleep during the process.
 			 */
-			if (!PageAnon(ptepage)) {
-				page_dup_file_rmap(ptepage, true);
-			} else if (page_try_dup_anon_rmap(ptepage, true,
-							  src_vma)) {
+			if (!folio_test_anon(pte_folio)) {
+				page_dup_file_rmap(&pte_folio->page, true);
+			} else if (page_try_dup_anon_rmap(&pte_folio->page,
+							  true, src_vma)) {
 				pte_t src_pte_old = entry;
 				struct folio *new_folio;
 
@@ -5140,14 +5140,14 @@ int copy_hugetlb_page_range(struct mm_struct *dst, struct mm_struct *src,
 				/* Do not use reserve as it's private owned */
 				new_folio = alloc_hugetlb_folio(dst_vma, addr, 1);
 				if (IS_ERR(new_folio)) {
-					put_page(ptepage);
+					folio_put(pte_folio);
 					ret = PTR_ERR(new_folio);
 					break;
 				}
 				ret = copy_user_large_folio(new_folio,
-							    page_folio(ptepage),
-							    addr, dst_vma);
-				put_page(ptepage);
+							    pte_folio,
+							    addr, dst_vma);
+				folio_put(pte_folio);
 				if (ret) {
 					folio_put(new_folio);
 					break;
-- 
2.25.1
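
A quick illustration of where the five savings come from: page-based
helpers such as get_page() and PageAnon() each call compound_head()
internally to find the head page, while a folio is by definition already
the head page, so folio-based helpers skip that lookup. The standalone
sketch below models this with simplified toy types; none of these
definitions are the kernel's real ones.

#include <assert.h>
#include <stdio.h>

/* Toy stand-ins for the kernel's types. */
struct page {
	struct page *head;	/* tail pages point at their head page */
	int refcount;		/* refcounting happens on the head page */
};

struct folio {
	struct page page;	/* a folio wraps its head page */
};

/* Page APIs resolve the head page on every call ... */
static struct page *compound_head(struct page *page)
{
	return page->head;
}

static void get_page(struct page *page)
{
	compound_head(page)->refcount++;	/* implicit head lookup */
}

/* ... while folio APIs pay for the lookup exactly once. */
static struct folio *page_folio(struct page *page)
{
	return (struct folio *)compound_head(page);
}

static void folio_get(struct folio *folio)
{
	folio->page.refcount++;			/* direct access */
}

int main(void)
{
	struct page pages[4];

	for (int i = 0; i < 4; i++)
		pages[i].head = &pages[0];	/* pages[0] is the head */
	pages[0].refcount = 1;

	get_page(&pages[3]);			/* one hidden head lookup */

	struct folio *pte_folio = page_folio(&pages[3]);  /* lookup once */
	folio_get(pte_folio);			/* no further lookups */

	assert(pages[0].refcount == 3);
	printf("head refcount: %d\n", pages[0].refcount);
	return 0;
}

In the patched copy_hugetlb_page_range(), the same trade happens at
page_folio(pte_page(entry)): one explicit head lookup up front, after
which folio_get(), folio_test_anon() and folio_put() all operate on the
head page directly.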

From: Peng Zhang
Subject: [PATCH v2 2/3] mm/hugetlb: Use a folio in hugetlb_wp()
Date: Tue, 6 Jun 2023 14:20:12 +0800
Message-ID: <20230606062013.2947002-3-zhangpeng362@huawei.com>
In-Reply-To: <20230606062013.2947002-1-zhangpeng362@huawei.com>
References: <20230606062013.2947002-1-zhangpeng362@huawei.com>

From: ZhangPeng

We can replace nine implicit calls to compound_head() with one by using
old_folio. The page we get back is always a head page, so we just
convert old_page to old_folio.

Suggested-by: Matthew Wilcox (Oracle)
Signed-off-by: ZhangPeng
Reviewed-by: Muchun Song
Reviewed-by: Sidhartha Kumar
Reviewed-by: Matthew Wilcox (Oracle)
---
 mm/hugetlb.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d6f6d19958a5..e58f8001fd92 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5540,7 +5540,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	const bool unshare = flags & FAULT_FLAG_UNSHARE;
 	pte_t pte = huge_ptep_get(ptep);
 	struct hstate *h = hstate_vma(vma);
-	struct page *old_page;
+	struct folio *old_folio;
 	struct folio *new_folio;
 	int outside_reserve = 0;
 	vm_fault_t ret = 0;
@@ -5571,7 +5571,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		return 0;
 	}
 
-	old_page = pte_page(pte);
+	old_folio = page_folio(pte_page(pte));
 
 	delayacct_wpcopy_start();
 
@@ -5580,17 +5580,17 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * If no-one else is actually using this page, we're the exclusive
 	 * owner and can reuse this page.
 	 */
-	if (page_mapcount(old_page) == 1 && PageAnon(old_page)) {
-		if (!PageAnonExclusive(old_page))
-			page_move_anon_rmap(old_page, vma);
+	if (folio_mapcount(old_folio) == 1 && folio_test_anon(old_folio)) {
+		if (!PageAnonExclusive(&old_folio->page))
+			page_move_anon_rmap(&old_folio->page, vma);
 		if (likely(!unshare))
 			set_huge_ptep_writable(vma, haddr, ptep);
 
 		delayacct_wpcopy_end();
 		return 0;
 	}
-	VM_BUG_ON_PAGE(PageAnon(old_page) && PageAnonExclusive(old_page),
-		       old_page);
+	VM_BUG_ON_PAGE(folio_test_anon(old_folio) &&
+		       PageAnonExclusive(&old_folio->page), &old_folio->page);
 
 	/*
 	 * If the process that created a MAP_PRIVATE mapping is about to
@@ -5602,10 +5602,10 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * of the full address range.
 	 */
 	if (is_vma_resv_set(vma, HPAGE_RESV_OWNER) &&
-	    page_folio(old_page) != pagecache_folio)
+	    old_folio != pagecache_folio)
 		outside_reserve = 1;
 
-	get_page(old_page);
+	folio_get(old_folio);
 
 	/*
 	 * Drop page table lock as buddy allocator may be called. It will
@@ -5627,7 +5627,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 			pgoff_t idx;
 			u32 hash;
 
-			put_page(old_page);
+			folio_put(old_folio);
 			/*
 			 * Drop hugetlb_fault_mutex and vma_lock before
 			 * unmapping.  unmapping needs to hold vma_lock
@@ -5642,7 +5642,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 			hugetlb_vma_unlock_read(vma);
 			mutex_unlock(&hugetlb_fault_mutex_table[hash]);
 
-			unmap_ref_private(mm, vma, old_page, haddr);
+			unmap_ref_private(mm, vma, &old_folio->page, haddr);
 
 			mutex_lock(&hugetlb_fault_mutex_table[hash]);
 			hugetlb_vma_lock_read(vma);
@@ -5672,7 +5672,7 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto out_release_all;
 	}
 
-	if (copy_user_large_folio(new_folio, page_folio(old_page), address, vma)) {
+	if (copy_user_large_folio(new_folio, old_folio, address, vma)) {
 		ret = VM_FAULT_HWPOISON_LARGE;
 		goto out_release_all;
 	}
@@ -5694,14 +5694,14 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 		/* Break COW or unshare */
 		huge_ptep_clear_flush(vma, haddr, ptep);
 		mmu_notifier_invalidate_range(mm, range.start, range.end);
-		page_remove_rmap(old_page, vma, true);
+		page_remove_rmap(&old_folio->page, vma, true);
 		hugepage_add_new_anon_rmap(new_folio, vma, haddr);
 		if (huge_pte_uffd_wp(pte))
 			newpte = huge_pte_mkuffd_wp(newpte);
 		set_huge_pte_at(mm, haddr, ptep, newpte);
 		folio_set_hugetlb_migratable(new_folio);
 		/* Make the old page be freed below */
-		new_folio = page_folio(old_page);
+		new_folio = old_folio;
 	}
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(&range);
@@ -5710,11 +5710,11 @@ static vm_fault_t hugetlb_wp(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * No restore in case of successful pagetable update (Break COW or
 	 * unshare)
 	 */
-	if (new_folio != page_folio(old_page))
+	if (new_folio != old_folio)
 		restore_reserve_on_error(h, vma, haddr, new_folio);
 	folio_put(new_folio);
 out_release_old:
-	put_page(old_page);
+	folio_put(old_folio);
 
 	spin_lock(ptl);	/* Caller expects lock to be held */
 
-- 
2.25.1
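
Note how several sites above pass &old_folio->page into helpers that
still take a struct page (page_move_anon_rmap(), unmap_ref_private(),
page_remove_rmap()). That conversion is free: the folio's first member
is its head page, so &folio->page is the same address as the folio
itself and is always a head page. A minimal sketch of the idiom, again
with toy stand-in types rather than the kernel's:

#include <assert.h>

struct page { struct page *head; };
struct folio { struct page page; };	/* head page is the first member */

/* A not-yet-converted helper that still takes a struct page. */
static int page_is_head(struct page *page)
{
	return page->head == page;
}

int main(void)
{
	struct folio folio;

	folio.page.head = &folio.page;	/* a head page points to itself */

	/* The folio and its embedded head page share one address ... */
	assert((void *)&folio == (void *)&folio.page);
	/* ... so legacy page APIs handed &folio->page see a head page. */
	assert(page_is_head(&folio.page));
	return 0;
}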

From: Peng Zhang
Subject: [PATCH v2 3/3] mm/hugetlb: Use a folio in hugetlb_fault()
Date: Tue, 6 Jun 2023 14:20:13 +0800
Message-ID: <20230606062013.2947002-4-zhangpeng362@huawei.com>
In-Reply-To: <20230606062013.2947002-1-zhangpeng362@huawei.com>
References: <20230606062013.2947002-1-zhangpeng362@huawei.com>

From: ZhangPeng

We can replace seven implicit calls to compound_head() with one by
using folio.

Signed-off-by: ZhangPeng
Reviewed-by: Sidhartha Kumar
Reviewed-by: Matthew Wilcox (Oracle)
Reviewed-by: Muchun Song
---
 mm/hugetlb.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e58f8001fd92..e34329e25abe 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6062,7 +6062,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	vm_fault_t ret;
 	u32 hash;
 	pgoff_t idx;
-	struct page *page = NULL;
+	struct folio *folio = NULL;
 	struct folio *pagecache_folio = NULL;
 	struct hstate *h = hstate_vma(vma);
 	struct address_space *mapping;
@@ -6181,14 +6181,14 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * pagecache_folio, so here we need take the former one
 	 * when page != pagecache_folio or !pagecache_folio.
 	 */
-	page = pte_page(entry);
-	if (page_folio(page) != pagecache_folio)
-		if (!trylock_page(page)) {
+	folio = page_folio(pte_page(entry));
+	if (folio != pagecache_folio)
+		if (!folio_trylock(folio)) {
 			need_wait_lock = 1;
 			goto out_ptl;
 		}
 
-	get_page(page);
+	folio_get(folio);
 
 	if (flags & (FAULT_FLAG_WRITE|FAULT_FLAG_UNSHARE)) {
 		if (!huge_pte_write(entry)) {
@@ -6204,9 +6204,9 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 			flags & FAULT_FLAG_WRITE))
 		update_mmu_cache(vma, haddr, ptep);
 out_put_page:
-	if (page_folio(page) != pagecache_folio)
-		unlock_page(page);
-	put_page(page);
+	if (folio != pagecache_folio)
+		folio_unlock(folio);
+	folio_put(folio);
 out_ptl:
 	spin_unlock(ptl);
 
@@ -6225,7 +6225,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 	 * here without taking refcount.
 	 */
 	if (need_wait_lock)
-		wait_on_page_locked(page);
+		folio_wait_locked(folio);
 	return ret;
 }
 
-- 
2.25.1
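
With the conversion done, the fault path pins and locks the folio
directly: folio_trylock() (falling back to folio_wait_locked() when the
lock is contended), folio_get(), then folio_unlock() and folio_put() in
reverse order on the exit path. The toy, userspace-runnable model below
shows that ordering; the types and helpers are simplified stand-ins,
not the kernel implementation.

#include <assert.h>
#include <stdbool.h>

struct folio {
	bool locked;
	int refcount;
};

static bool folio_trylock(struct folio *folio)
{
	if (folio->locked)
		return false;		/* caller must wait and retry */
	folio->locked = true;
	return true;
}

static void folio_unlock(struct folio *folio) { folio->locked = false; }
static void folio_get(struct folio *folio) { folio->refcount++; }
static void folio_put(struct folio *folio) { folio->refcount--; }

int main(void)
{
	struct folio folio = { .locked = false, .refcount = 1 };

	if (folio_trylock(&folio)) {	/* lock before taking the ref */
		folio_get(&folio);
		/* ... fault handling would run here ... */
		folio_unlock(&folio);	/* release in reverse order */
		folio_put(&folio);
	}

	assert(!folio.locked && folio.refcount == 1);
	return 0;
}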