From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz, Kefeng Wang
Subject: [PATCH 2/6] mm: mempolicy: make mpol_misplaced() take a folio
Date: Mon, 18 Sep 2023 18:32:09 +0800
Message-ID: <20230918103213.4166210-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20230918103213.4166210-1-wangkefeng.wang@huawei.com>
References: <20230918103213.4166210-1-wangkefeng.wang@huawei.com>

In preparation for large folio NUMA balancing, make mpol_misplaced()
take a folio; no functional change intended.
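The shape of the conversion, sketched for reviewers skimming the series:
the one caller that still holds a bare struct page wraps it with
page_folio() at the call boundary, and everything below that point works
in folio terms. This is an illustrative fragment under that assumption,
not the kernel code itself; example_misplaced_check() is a made-up name:

	/*
	 * Illustrative sketch only: convert at the API boundary exactly
	 * once, then stay in folio terms. page_folio() resolves either a
	 * head or a tail page to its containing folio.
	 */
	static int example_misplaced_check(struct page *page,
					   struct vm_area_struct *vma,
					   unsigned long addr)
	{
		struct folio *folio = page_folio(page);

		return mpol_misplaced(folio, vma, addr);
	}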
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mempolicy.h |  4 ++--
 mm/memory.c               |  2 +-
 mm/mempolicy.c            | 21 ++++++++++-----------
 3 files changed, 13 insertions(+), 14 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index d232de7cdc56..4a82eee20073 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -174,7 +174,7 @@ extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
 /* Check if a vma is migratable */
 extern bool vma_migratable(struct vm_area_struct *vma);
 
-extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long);
+extern int mpol_misplaced(struct folio *, struct vm_area_struct *, unsigned long);
 extern void mpol_put_task_policy(struct task_struct *);
 
 static inline bool mpol_is_preferred_many(struct mempolicy *pol)
@@ -278,7 +278,7 @@ static inline int mpol_parse_str(char *str, struct mempolicy **mpol)
 }
 #endif
 
-static inline int mpol_misplaced(struct page *page, struct vm_area_struct *vma,
+static inline int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma,
 				 unsigned long address)
 {
 	return -1; /* no node preference */
diff --git a/mm/memory.c b/mm/memory.c
index 983a40f8ee62..a04c90604c73 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4731,7 +4731,7 @@ int numa_migrate_prep(struct page *page, struct vm_area_struct *vma,
 		*flags |= TNF_FAULT_LOCAL;
 	}
 
-	return mpol_misplaced(page, vma, addr);
+	return mpol_misplaced(page_folio(page), vma, addr);
 }
 
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 39584dc25c84..14a223b68180 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2565,24 +2565,24 @@ static void sp_free(struct sp_node *n)
 }
 
 /**
- * mpol_misplaced - check whether current page node is valid in policy
+ * mpol_misplaced - check whether current folio node is valid in policy
  *
- * @page: page to be checked
- * @vma: vm area where page mapped
- * @addr: virtual address where page mapped
+ * @folio: folio to be checked
+ * @vma: vm area where folio mapped
+ * @addr: virtual address in @vma for shared policy lookup and interleave policy
  *
- * Lookup current policy node id for vma,addr and "compare to" page's
+ * Lookup current policy node id for vma,addr and "compare to" folio's
  * node id. Policy determination "mimics" alloc_page_vma().
  * Called from fault path where we know the vma and faulting address.
  *
  * Return: NUMA_NO_NODE if the page is in a node that is valid for this
- *	policy, or a suitable node ID to allocate a replacement page from.
+ *	policy, or a suitable node ID to allocate a replacement folio from.
  */
-int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long addr)
+int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma, unsigned long addr)
 {
 	struct mempolicy *pol;
 	struct zoneref *z;
-	int curnid = page_to_nid(page);
+	int curnid = folio_nid(folio);
 	unsigned long pgoff;
 	int thiscpu = raw_smp_processor_id();
 	int thisnid = cpu_to_node(thiscpu);
@@ -2638,12 +2638,11 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		BUG();
 	}
 
-	/* Migrate the page towards the node whose CPU is referencing it */
+	/* Migrate the folio towards the node whose CPU is referencing it */
 	if (pol->flags & MPOL_F_MORON) {
 		polnid = thisnid;
 
-		if (!should_numa_migrate_memory(current, page_folio(page),
-						curnid, thiscpu))
+		if (!should_numa_migrate_memory(current, folio, curnid, thiscpu))
 			goto out;
 	}
 
-- 
2.27.0
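One closing note on the folio_nid() substitution in mpol_misplaced(): a
folio is a physically contiguous block of pages, so all of its pages sit
on the same NUMA node and the head page's node identifies the whole
folio. A minimal sketch of the equivalence being relied on (illustrative
only; example_nid_unchanged() is a made-up name):

	/*
	 * Illustrative only: folio_nid() reports the node of the head
	 * page, and physical contiguity guarantees every page in the
	 * folio shares that node, so it is a drop-in replacement for
	 * page_to_nid() on the faulting page.
	 */
	static bool example_nid_unchanged(struct folio *folio)
	{
		return folio_nid(folio) == page_to_nid(&folio->page);
	}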