From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz, Kefeng Wang
Subject: [PATCH v2 5/6] mm: mempolicy: make mpol_misplaced() to take a folio
Date: Thu, 21 Sep 2023 15:44:16 +0800
Message-ID: <20230921074417.24004-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20230921074417.24004-1-wangkefeng.wang@huawei.com>
References: <20230921074417.24004-1-wangkefeng.wang@huawei.com>

In preparation for large folio numa balancing, make mpol_misplaced()
take a folio; no functional change intended.
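To illustrate the new calling convention, here is a minimal sketch of a
fault-path caller (illustrative only; the helper name
folio_preferred_node() is made up for this note, while the real caller
converted by this patch is numa_migrate_prep() in mm/memory.c, shown in
the diff below):

	#include <linux/mm.h>
	#include <linux/mempolicy.h>

	/*
	 * Sketch: ask the memory policy layer whether @folio sits on a
	 * misplaced node for the VMA/address it was faulted in from.
	 * With this patch the folio is passed directly; previously
	 * callers had to pass the head page via &folio->page.
	 */
	static int folio_preferred_node(struct folio *folio,
					struct vm_area_struct *vma,
					unsigned long addr)
	{
		/* NUMA_NO_NODE if placement is fine, else a target node. */
		return mpol_misplaced(folio, vma, addr);
	}

Note that inside mpol_misplaced() the MPOL_F_MORON path still passes
&folio->page to should_numa_migrate_memory(), which has not yet been
converted to take a folio.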
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 include/linux/mempolicy.h |  5 +++--
 mm/memory.c               |  2 +-
 mm/mempolicy.c            | 22 ++++++++++++----------
 3 files changed, 16 insertions(+), 13 deletions(-)

diff --git a/include/linux/mempolicy.h b/include/linux/mempolicy.h
index d232de7cdc56..6c2754d7bfed 100644
--- a/include/linux/mempolicy.h
+++ b/include/linux/mempolicy.h
@@ -174,7 +174,7 @@ extern void mpol_to_str(char *buffer, int maxlen, struct mempolicy *pol);
 /* Check if a vma is migratable */
 extern bool vma_migratable(struct vm_area_struct *vma);
 
-extern int mpol_misplaced(struct page *, struct vm_area_struct *, unsigned long);
+int mpol_misplaced(struct folio *, struct vm_area_struct *, unsigned long);
 extern void mpol_put_task_policy(struct task_struct *);
 
 static inline bool mpol_is_preferred_many(struct mempolicy *pol)
@@ -278,7 +278,8 @@ static inline int mpol_parse_str(char *str, struct mempolicy **mpol)
 }
 #endif
 
-static inline int mpol_misplaced(struct page *page, struct vm_area_struct *vma,
+static inline int mpol_misplaced(struct folio *folio,
+				 struct vm_area_struct *vma,
 				 unsigned long address)
 {
 	return -1; /* no node preference */
diff --git a/mm/memory.c b/mm/memory.c
index 93ce8bcbe9d7..29c5618c91e5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -4741,7 +4741,7 @@ int numa_migrate_prep(struct folio *folio, struct vm_area_struct *vma,
 		*flags |= TNF_FAULT_LOCAL;
 	}
 
-	return mpol_misplaced(&folio->page, vma, addr);
+	return mpol_misplaced(folio, vma, addr);
 }
 
 static vm_fault_t do_numa_page(struct vm_fault *vmf)
diff --git a/mm/mempolicy.c b/mm/mempolicy.c
index 98fae2bfc851..ecf06ce3a5dd 100644
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -2572,24 +2572,25 @@ static void sp_free(struct sp_node *n)
 }
 
 /**
- * mpol_misplaced - check whether current page node is valid in policy
+ * mpol_misplaced - check whether current folio node is valid in policy
  *
- * @page: page to be checked
- * @vma: vm area where page mapped
- * @addr: virtual address where page mapped
+ * @folio: folio to be checked
+ * @vma: vm area where folio mapped
+ * @addr: virtual address in @vma for shared policy lookup and interleave policy
  *
- * Lookup current policy node id for vma,addr and "compare to" page's
+ * Lookup current policy node id for vma,addr and "compare to" folio's
  * node id. Policy determination "mimics" alloc_page_vma().
  * Called from fault path where we know the vma and faulting address.
  *
  * Return: NUMA_NO_NODE if the page is in a node that is valid for this
- *	policy, or a suitable node ID to allocate a replacement page from.
+ *	policy, or a suitable node ID to allocate a replacement folio from.
  */
-int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long addr)
+int mpol_misplaced(struct folio *folio, struct vm_area_struct *vma,
+		   unsigned long addr)
 {
 	struct mempolicy *pol;
 	struct zoneref *z;
-	int curnid = page_to_nid(page);
+	int curnid = folio_nid(folio);
 	unsigned long pgoff;
 	int thiscpu = raw_smp_processor_id();
 	int thisnid = cpu_to_node(thiscpu);
@@ -2645,11 +2646,12 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
 		BUG();
 	}
 
-	/* Migrate the page towards the node whose CPU is referencing it */
+	/* Migrate the folio towards the node whose CPU is referencing it */
 	if (pol->flags & MPOL_F_MORON) {
 		polnid = thisnid;
 
-		if (!should_numa_migrate_memory(current, page, curnid, thiscpu))
+		if (!should_numa_migrate_memory(current, &folio->page, curnid,
+						thiscpu))
 			goto out;
 	}
 
-- 
2.27.0