From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz, Kefeng Wang
Subject: [PATCH 6/6] mm: huge_memory: use a folio in do_huge_pmd_numa_page()
Date: Mon, 18 Sep 2023 18:32:13 +0800
Message-ID: <20230918103213.4166210-7-wangkefeng.wang@huawei.com>
In-Reply-To: <20230918103213.4166210-1-wangkefeng.wang@huawei.com>
References: <20230918103213.4166210-1-wangkefeng.wang@huawei.com>
List-ID: linux-kernel@vger.kernel.org

Use a folio in do_huge_pmd_numa_page(), reducing three page_folio()
calls to one; no functional change intended.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/huge_memory.c | 28 +++++++++++++---------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3e34a48fbdd8..5c015ca40fea 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1517,9 +1517,9 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	struct vm_area_struct *vma = vmf->vma;
 	pmd_t oldpmd = vmf->orig_pmd;
 	pmd_t pmd;
-	struct page *page;
+	struct folio *folio;
 	unsigned long haddr = vmf->address & HPAGE_PMD_MASK;
-	int page_nid = NUMA_NO_NODE;
+	int nid = NUMA_NO_NODE;
 	int target_nid, last_cpupid = (-1 & LAST_CPUPID_MASK);
 	bool migrated = false, writable = false;
 	int flags = 0;
@@ -1541,36 +1541,35 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	    can_change_pmd_writable(vma, vmf->address, pmd))
 		writable = true;
 
-	page = vm_normal_page_pmd(vma, haddr, pmd);
-	if (!page)
+	folio = vm_normal_pmd_folio(vma, haddr, pmd);
+	if (!folio)
 		goto out_map;
 
 	/* See similar comment in do_numa_page for explanation */
 	if (!writable)
 		flags |= TNF_NO_GROUP;
 
-	page_nid = page_to_nid(page);
+	nid = folio_nid(folio);
 	/*
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time. So use default value.
 	 */
-	if (node_is_toptier(page_nid))
-		last_cpupid = page_cpupid_last(page);
-	target_nid = numa_migrate_prep(page_folio(page), vma, haddr, page_nid,
-				       &flags);
+	if (node_is_toptier(nid))
+		last_cpupid = page_cpupid_last(&folio->page);
 
+	target_nid = numa_migrate_prep(folio, vma, haddr, nid, &flags);
 	if (target_nid == NUMA_NO_NODE) {
-		put_page(page);
+		folio_put(folio);
 		goto out_map;
 	}
 
 	spin_unlock(vmf->ptl);
 	writable = false;
 
-	migrated = migrate_misplaced_folio(page_folio(page), vma, target_nid);
+	migrated = migrate_misplaced_folio(folio, vma, target_nid);
 	if (migrated) {
 		flags |= TNF_MIGRATED;
-		page_nid = target_nid;
+		nid = target_nid;
 	} else {
 		flags |= TNF_MIGRATE_FAIL;
 		vmf->ptl = pmd_lock(vma->vm_mm, vmf->pmd);
@@ -1582,9 +1581,8 @@ vm_fault_t do_huge_pmd_numa_page(struct vm_fault *vmf)
 	}
 
 out:
-	if (page_nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, page_nid, HPAGE_PMD_NR,
-				flags);
+	if (nid != NUMA_NO_NODE)
+		task_numa_fault(last_cpupid, nid, HPAGE_PMD_NR, flags);
 
 	return 0;
 
-- 
2.27.0