From: Kefeng Wang
Subject: [PATCH -next 1/7] mm: huge_memory: make __do_huge_pmd_anonymous_page() take a folio
Date: Thu, 12 Jan 2023 16:30:00 +0800
Message-ID: <20230112083006.163393-2-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20230112083006.163393-1-wangkefeng.wang@huawei.com>
References: <20230112083006.163393-1-wangkefeng.wang@huawei.com>
X-Mailing-List: linux-kernel@vger.kernel.org

Let __do_huge_pmd_anonymous_page() take a folio and convert the related
functions to use folios.
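The put_page() -> folio_put() swaps below are behavior-preserving
because putting a page drops a reference on its containing folio. As a
reference, a minimal sketch of that relationship (simplified and renamed
to avoid clashing with the real <linux/mm.h> helper, which additionally
handles ZONE_DEVICE pages):

	/* sketch only: the real put_page() has extra cases */
	static inline void put_page_sketch(struct page *page)
	{
		/* page_folio() resolves any tail page to its folio */
		folio_put(page_folio(page));
	}

Since __do_huge_pmd_anonymous_page() now receives the folio directly,
calling folio_put() also skips that page_folio() lookup.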
Signed-off-by: Kefeng Wang
---
 mm/huge_memory.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index c13b1f67d14e..cb23b24e2eb8 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -650,22 +650,23 @@ unsigned long thp_get_unmapped_area(struct file *filp, unsigned long addr,
 EXPORT_SYMBOL_GPL(thp_get_unmapped_area);
 
 static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
-			struct page *page, gfp_t gfp)
+			struct folio *folio, gfp_t gfp)
 {
 	struct vm_area_struct *vma = vmf->vma;
+	struct page *page = &folio->page;
 	pgtable_t pgtable;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
 	vm_fault_t ret = 0;
 
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
+	VM_BUG_ON_FOLIO(!folio_test_large(folio), folio);
 
-	if (mem_cgroup_charge(page_folio(page), vma->vm_mm, gfp)) {
-		put_page(page);
+	if (mem_cgroup_charge(folio, vma->vm_mm, gfp)) {
+		folio_put(folio);
 		count_vm_event(THP_FAULT_FALLBACK);
 		count_vm_event(THP_FAULT_FALLBACK_CHARGE);
 		return VM_FAULT_FALLBACK;
 	}
-	cgroup_throttle_swaprate(page, gfp);
+	folio_throttle_swaprate(folio, gfp);
 
 	pgtable = pte_alloc_one(vma->vm_mm);
 	if (unlikely(!pgtable)) {
@@ -675,11 +676,11 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 
 	clear_huge_page(page, vmf->address, HPAGE_PMD_NR);
 	/*
-	 * The memory barrier inside __SetPageUptodate makes sure that
+	 * The memory barrier inside __folio_mark_uptodate makes sure that
 	 * clear_huge_page writes become visible before the set_pmd_at()
 	 * write.
 	 */
-	__SetPageUptodate(page);
+	__folio_mark_uptodate(folio);
 
 	vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
 	if (unlikely(!pmd_none(*vmf->pmd))) {
@@ -694,7 +695,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		/* Deliver the page fault to userland */
 		if (userfaultfd_missing(vma)) {
 			spin_unlock(vmf->ptl);
-			put_page(page);
+			folio_put(folio);
 			pte_free(vma->vm_mm, pgtable);
 			ret = handle_userfault(vmf, VM_UFFD_MISSING);
 			VM_BUG_ON(ret & VM_FAULT_FALLBACK);
@@ -704,7 +705,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 		entry = mk_huge_pmd(page, vma->vm_page_prot);
 		entry = maybe_pmd_mkwrite(pmd_mkdirty(entry), vma);
 		page_add_new_anon_rmap(page, vma, haddr);
-		lru_cache_add_inactive_or_unevictable(page, vma);
+		folio_add_lru_vma(folio, vma);
 		pgtable_trans_huge_deposit(vma->vm_mm, vmf->pmd, pgtable);
 		set_pmd_at(vma->vm_mm, haddr, vmf->pmd, entry);
 		update_mmu_cache_pmd(vma, vmf->address, vmf->pmd);
@@ -721,7 +722,7 @@ static vm_fault_t __do_huge_pmd_anonymous_page(struct vm_fault *vmf,
 release:
 	if (pgtable)
 		pte_free(vma->vm_mm, pgtable);
-	put_page(page);
+	folio_put(folio);
 	return ret;
 
 }
@@ -834,7 +835,7 @@ vm_fault_t do_huge_pmd_anonymous_page(struct vm_fault *vmf)
 		count_vm_event(THP_FAULT_FALLBACK);
 		return VM_FAULT_FALLBACK;
 	}
-	return __do_huge_pmd_anonymous_page(vmf, &folio->page, gfp);
+	return __do_huge_pmd_anonymous_page(vmf, folio, gfp);
 }
 
 static void insert_pfn_pmd(struct vm_area_struct *vma, unsigned long addr,
-- 
2.35.3
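A note on the comment kept in the second hunk: the ordering guarantee
moves from __SetPageUptodate() to __folio_mark_uptodate(). A minimal
sketch of that helper, assuming the upstream <linux/pagemap.h>
implementation (simplified; any instrumentation hooks omitted):

	static __always_inline void __folio_mark_uptodate(struct folio *folio)
	{
		/* order the clear_huge_page() stores before PG_uptodate */
		smp_wmb();
		__set_bit(PG_uptodate, folio_flags(folio, 0));
	}

set_pmd_at() publishes the mapping only after the flag is set, so other
threads can never observe a mapped but uninitialized huge page.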