From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz, Kefeng Wang
Subject: [PATCH v2 3/6] mm: memory: use a folio in do_numa_page()
Date: Thu, 21 Sep 2023 15:44:14 +0800
Message-ID: <20230921074417.24004-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20230921074417.24004-1-wangkefeng.wang@huawei.com>
References: <20230921074417.24004-1-wangkefeng.wang@huawei.com>

NUMA balancing only tries to migrate non-compound pages in
do_numa_page(), so use a folio there to save several compound_head()
calls. Note that we switch to folio_estimated_sharers(): checking the
folio's sharers is sufficient since only normal pages are handled
here; once large folio NUMA balancing is supported, a precise sharers
check should be used instead. No functional change intended.
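As an aside for reviewers, a simplified sketch of why the conversion
helps and why folio_estimated_sharers() is exact here (condensed from
the page/folio helpers in include/linux/mm.h; not part of this patch,
and the *_sketch names are made up for illustration):

	/* Every page-based helper first resolves the head page: */
	static inline void put_page_sketch(struct page *page)
	{
		struct folio *folio = page_folio(page);	/* compound_head() */

		folio_put(folio);
	}

	/*
	 * folio_estimated_sharers() reads the mapcount of the folio's
	 * first page.  do_numa_page() bails out on large folios (see
	 * the folio_test_large() check below), so the folio is always
	 * order-0 and its first page is exactly the page the old code
	 * passed to page_mapcount().
	 */
	static inline int folio_estimated_sharers_sketch(struct folio *folio)
	{
		return page_mapcount(folio_page(folio, 0));
	}

A caller that already holds a folio therefore pays the head-page
lookup once, in vm_normal_folio(), rather than once per helper.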
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/memory.c | 34 +++++++++++++++++-----------------
 1 file changed, 17 insertions(+), 17 deletions(-)

diff --git a/mm/memory.c b/mm/memory.c
index dbc7b67eca68..a05cfb6be36d 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4747,8 +4747,8 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
-	struct page *page = NULL;
-	int page_nid = NUMA_NO_NODE;
+	struct folio *folio = NULL;
+	int nid = NUMA_NO_NODE;
 	bool writable = false;
 	int last_cpupid;
 	int target_nid;
@@ -4779,12 +4779,12 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	    can_change_pte_writable(vma, vmf->address, pte))
 		writable = true;
 
-	page = vm_normal_page(vma, vmf->address, pte);
-	if (!page || is_zone_device_page(page))
+	folio = vm_normal_folio(vma, vmf->address, pte);
+	if (!folio || folio_is_zone_device(folio))
 		goto out_map;
 
 	/* TODO: handle PTE-mapped THP */
-	if (PageCompound(page))
+	if (folio_test_large(folio))
 		goto out_map;
 
 	/*
@@ -4799,34 +4799,34 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 		flags |= TNF_NO_GROUP;
 
 	/*
-	 * Flag if the page is shared between multiple address spaces. This
+	 * Flag if the folio is shared between multiple address spaces. This
 	 * is later used when determining whether to group tasks together
 	 */
-	if (page_mapcount(page) > 1 && (vma->vm_flags & VM_SHARED))
+	if (folio_estimated_sharers(folio) > 1 && (vma->vm_flags & VM_SHARED))
 		flags |= TNF_SHARED;
 
-	page_nid = page_to_nid(page);
+	nid = folio_nid(folio);
 	/*
 	 * For memory tiering mode, cpupid of slow memory page is used
 	 * to record page access time. So use default value.
 	 */
 	if ((sysctl_numa_balancing_mode & NUMA_BALANCING_MEMORY_TIERING) &&
-	    !node_is_toptier(page_nid))
+	    !node_is_toptier(nid))
 		last_cpupid = (-1 & LAST_CPUPID_MASK);
 	else
-		last_cpupid = page_cpupid_last(page);
-	target_nid = numa_migrate_prep(page, vma, vmf->address, page_nid,
-			&flags);
+		last_cpupid = page_cpupid_last(&folio->page);
+	target_nid = numa_migrate_prep(&folio->page, vma, vmf->address, nid,
+			&flags);
 	if (target_nid == NUMA_NO_NODE) {
-		put_page(page);
+		folio_put(folio);
 		goto out_map;
 	}
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
 	writable = false;
 
 	/* Migrate to the requested node */
-	if (migrate_misplaced_folio(page_folio(page), vma, target_nid)) {
-		page_nid = target_nid;
+	if (migrate_misplaced_folio(folio, vma, target_nid)) {
+		nid = target_nid;
 		flags |= TNF_MIGRATED;
 	} else {
 		flags |= TNF_MIGRATE_FAIL;
@@ -4842,8 +4842,8 @@ static vm_fault_t do_numa_page(struct vm_fault *vmf)
 	}
 
 out:
-	if (page_nid != NUMA_NO_NODE)
-		task_numa_fault(last_cpupid, page_nid, 1, flags);
+	if (nid != NUMA_NO_NODE)
+		task_numa_fault(last_cpupid, nid, 1, flags);
 	return 0;
 out_map:
 	/*
-- 
2.27.0