From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz
Subject: [PATCH v2 1/8] mm: migrate: remove PageTransHuge check in numamigrate_isolate_page()
Date: Mon, 21 Aug 2023 19:56:17 +0800
Message-ID: <20230821115624.158759-2-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821115624.158759-1-wangkefeng.wang@huawei.com>
References: <20230821115624.158759-1-wangkefeng.wang@huawei.com>

We are beginning to convert the NUMA migration code to use folios, which
will let us handle folios of arbitrary size, so drop the assertion that
only PageTransHuge (PMD-sized) pages are supported when order > 0.

Suggested-by: Matthew Wilcox (Oracle)
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index b7fa020003f3..646d8ee7f102 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2483,8 +2483,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 	int nr_pages = thp_nr_pages(page);
 	int order = compound_order(page);
 
-	VM_BUG_ON_PAGE(order && !PageTransHuge(page), page);
-
 	/* Do not migrate THP mapped by multiple processes */
 	if (PageTransHuge(page) && total_mapcount(page) > 1)
 		return 0;
-- 
2.41.0

From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz
Subject: [PATCH v2 2/8] mm: migrate: remove THP mapcount check in numamigrate_isolate_page()
Date: Mon, 21 Aug 2023 19:56:18 +0800
Message-ID: <20230821115624.158759-3-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821115624.158759-1-wangkefeng.wang@huawei.com>
References: <20230821115624.158759-1-wangkefeng.wang@huawei.com>

The check for a THP mapped by multiple processes was introduced by
commit 04fa5d6a6547 ("mm: migrate: check page_count of THP before
migrating") and refactored by commit 340ef3902cf2 ("mm: numa: cleanup
flow of transhuge page migration"), but it is now out of date:
migrate_misplaced_page() uses the standard migrate_pages() for both
small pages and THPs, and the reference count check is done in
folio_migrate_mapping(), so remove the special check for THP.

Suggested-by: Matthew Wilcox (Oracle)
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: "Huang, Ying"
---
 mm/migrate.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 646d8ee7f102..f2d86dfd8423 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2483,10 +2483,6 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 	int nr_pages = thp_nr_pages(page);
 	int order = compound_order(page);
 
-	/* Do not migrate THP mapped by multiple processes */
-	if (PageTransHuge(page) && total_mapcount(page) > 1)
-		return 0;
-
 	/* Avoid migrating to a node that is nearly full */
 	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
 		int z;
-- 
2.41.0

From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz
Subject: [PATCH v2 3/8] mm: migrate: convert numamigrate_isolate_page() to numamigrate_isolate_folio()
Date: Mon, 21 Aug 2023 19:56:19 +0800
Message-ID: <20230821115624.158759-4-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821115624.158759-1-wangkefeng.wang@huawei.com>
References: <20230821115624.158759-1-wangkefeng.wang@huawei.com>

Rename numamigrate_isolate_page() to numamigrate_isolate_folio(), make
it take a folio, and use the folio API to save compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index f2d86dfd8423..281eafdf8e63 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2478,10 +2478,9 @@ static struct folio *alloc_misplaced_dst_folio(struct folio *src,
 	return __folio_alloc_node(gfp, order, nid);
 }
 
-static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
+static int numamigrate_isolate_folio(pg_data_t *pgdat, struct folio *folio)
 {
-	int nr_pages = thp_nr_pages(page);
-	int order = compound_order(page);
+	int nr_pages = folio_nr_pages(folio);
 
 	/* Avoid migrating to a node that is nearly full */
 	if (!migrate_balanced_pgdat(pgdat, nr_pages)) {
@@ -2493,22 +2492,23 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 			if (managed_zone(pgdat->node_zones + z))
 				break;
 		}
-		wakeup_kswapd(pgdat->node_zones + z, 0, order, ZONE_MOVABLE);
+		wakeup_kswapd(pgdat->node_zones + z, 0,
+			      folio_order(folio), ZONE_MOVABLE);
 		return 0;
 	}
 
-	if (!isolate_lru_page(page))
+	if (!folio_isolate_lru(folio))
 		return 0;
 
-	mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON + page_is_file_lru(page),
+	node_stat_mod_folio(folio, NR_ISOLATED_ANON + folio_is_file_lru(folio),
 			    nr_pages);
 
 	/*
-	 * Isolating the page has taken another reference, so the
-	 * caller's reference can be safely dropped without the page
+	 * Isolating the folio has taken another reference, so the
+	 * caller's reference can be safely dropped without the folio
 	 * disappearing underneath us during migration.
 	 */
-	put_page(page);
+	folio_put(folio);
 	return 1;
 }
 
@@ -2542,7 +2542,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	if (page_is_file_lru(page) && PageDirty(page))
 		goto out;
 
-	isolated = numamigrate_isolate_page(pgdat, page);
+	isolated = numamigrate_isolate_folio(pgdat, page_folio(page));
 	if (!isolated)
 		goto out;
 
-- 
2.41.0

From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz
Subject: [PATCH v2 4/8] mm: migrate: use a folio in migrate_misplaced_page()
Date: Mon, 21 Aug 2023 19:56:20 +0800
Message-ID: <20230821115624.158759-5-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821115624.158759-1-wangkefeng.wang@huawei.com>
References: <20230821115624.158759-1-wangkefeng.wang@huawei.com>

Use a folio in migrate_misplaced_page() to save compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 281eafdf8e63..fc728f9a383f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2521,17 +2521,18 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 			 int node)
 {
 	pg_data_t *pgdat = NODE_DATA(node);
+	struct folio *folio = page_folio(page);
 	int isolated;
 	int nr_remaining;
 	unsigned int nr_succeeded;
 	LIST_HEAD(migratepages);
-	int nr_pages = thp_nr_pages(page);
+	int nr_pages = folio_nr_pages(folio);
 
 	/*
 	 * Don't migrate file pages that are mapped in multiple processes
 	 * with execute permissions as they are probably shared libraries.
 	 */
-	if (page_mapcount(page) != 1 && page_is_file_lru(page) &&
+	if (page_mapcount(page) != 1 && folio_is_file_lru(folio) &&
 	    (vma->vm_flags & VM_EXEC))
 		goto out;
 
@@ -2539,29 +2540,29 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	 * Also do not migrate dirty pages as not all filesystems can move
 	 * dirty pages in MIGRATE_ASYNC mode which is a waste of cycles.
 	 */
-	if (page_is_file_lru(page) && PageDirty(page))
+	if (folio_is_file_lru(folio) && folio_test_dirty(folio))
 		goto out;
 
-	isolated = numamigrate_isolate_folio(pgdat, page_folio(page));
+	isolated = numamigrate_isolate_folio(pgdat, folio);
 	if (!isolated)
 		goto out;
 
-	list_add(&page->lru, &migratepages);
+	list_add(&folio->lru, &migratepages);
 	nr_remaining = migrate_pages(&migratepages, alloc_misplaced_dst_folio,
 				     NULL, node, MIGRATE_ASYNC,
 				     MR_NUMA_MISPLACED, &nr_succeeded);
 	if (nr_remaining) {
 		if (!list_empty(&migratepages)) {
-			list_del(&page->lru);
-			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-					page_is_file_lru(page), -nr_pages);
-			putback_lru_page(page);
+			list_del(&folio->lru);
+			node_stat_mod_folio(folio, NR_ISOLATED_ANON +
+					folio_is_file_lru(folio), -nr_pages);
+			folio_putback_lru(folio);
 		}
 		isolated = 0;
 	}
 	if (nr_succeeded) {
 		count_vm_numa_events(NUMA_PAGE_MIGRATE, nr_succeeded);
-		if (!node_is_toptier(page_to_nid(page)) && node_is_toptier(node))
+		if (!node_is_toptier(folio_nid(folio)) && node_is_toptier(node))
 			mod_node_page_state(pgdat, PGPROMOTE_SUCCESS,
 					    nr_succeeded);
 	}
@@ -2569,7 +2570,7 @@ int migrate_misplaced_page(struct page *page, struct vm_area_struct *vma,
 	return isolated;
 
 out:
-	put_page(page);
+	folio_put(folio);
 	return 0;
 }
 #endif /* CONFIG_NUMA_BALANCING */
-- 
2.41.0

From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz
Subject: [PATCH v2 5/8] mm: migrate: use __folio_test_movable()
Date: Mon, 21 Aug 2023 19:56:21 +0800
Message-ID: <20230821115624.158759-6-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821115624.158759-1-wangkefeng.wang@huawei.com>
References: <20230821115624.158759-1-wangkefeng.wang@huawei.com>

Use __folio_test_movable(); there is no need to convert from folio back
to page.

Reviewed-by: Matthew Wilcox (Oracle)
Reviewed-by: David Hildenbrand
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index fc728f9a383f..b715cd59bdec 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -157,8 +157,8 @@ void putback_movable_pages(struct list_head *l)
 		list_del(&folio->lru);
 		/*
 		 * We isolated non-lru movable folio so here we can use
-		 * __PageMovable because LRU folio's mapping cannot have
-		 * PAGE_MAPPING_MOVABLE.
+		 * __folio_test_movable because LRU folio's mapping cannot
+		 * have PAGE_MAPPING_MOVABLE.
 		 */
 		if (unlikely(__folio_test_movable(folio))) {
 			VM_BUG_ON_FOLIO(!folio_test_isolated(folio), folio);
@@ -943,7 +943,7 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 				enum migrate_mode mode)
 {
 	int rc = -EAGAIN;
-	bool is_lru = !__PageMovable(&src->page);
+	bool is_lru = !__folio_test_movable(src);
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(src), src);
 	VM_BUG_ON_FOLIO(!folio_test_locked(dst), dst);
@@ -990,7 +990,7 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 	 * src is freed; but stats require that PageAnon be left as PageAnon.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
-		if (__PageMovable(&src->page)) {
+		if (__folio_test_movable(src)) {
 			VM_BUG_ON_FOLIO(!folio_test_isolated(src), src);
 
 			/*
@@ -1082,7 +1082,7 @@ static void migrate_folio_done(struct folio *src,
 	/*
 	 * Compaction can migrate also non-LRU pages which are
 	 * not accounted to NR_ISOLATED_*. They can be recognized
-	 * as __PageMovable
+	 * as __folio_test_movable
 	 */
 	if (likely(!__folio_test_movable(src)))
 		mod_node_page_state(folio_pgdat(src), NR_ISOLATED_ANON +
@@ -1103,7 +1103,7 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
 	int rc = -EAGAIN;
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
-	bool is_lru = !__PageMovable(&src->page);
+	bool is_lru = !__folio_test_movable(src);
 	bool locked = false;
 	bool dst_locked = false;
 
@@ -1261,7 +1261,7 @@ static int migrate_folio_move(free_folio_t put_new_folio, unsigned long private,
 	int rc;
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
-	bool is_lru = !__PageMovable(&src->page);
+	bool is_lru = !__folio_test_movable(src);
 	struct list_head *prev;
 
 	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
-- 
2.41.0

From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz
Subject: [PATCH v2 6/8] mm: migrate: use a folio in add_page_for_migration()
Date: Mon, 21 Aug 2023 19:56:22 +0800
Message-ID: <20230821115624.158759-7-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821115624.158759-1-wangkefeng.wang@huawei.com>
References: <20230821115624.158759-1-wangkefeng.wang@huawei.com>

Use a folio in add_page_for_migration() to save compound_head() calls.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 40 +++++++++++++++++++---------------------
 1 file changed, 19 insertions(+), 21 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index b715cd59bdec..73572d5a5cd4 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2057,6 +2057,7 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
 	struct vm_area_struct *vma;
 	unsigned long addr;
 	struct page *page;
+	struct folio *folio;
 	int err;
 	bool isolated;
 
@@ -2079,45 +2080,42 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
 	if (!page)
 		goto out;
 
-	if (is_zone_device_page(page))
-		goto out_putpage;
+	folio = page_folio(page);
+	if (folio_is_zone_device(folio))
+		goto out_putfolio;
 
 	err = 0;
-	if (page_to_nid(page) == node)
-		goto out_putpage;
+	if (folio_nid(folio) == node)
+		goto out_putfolio;
 
 	err = -EACCES;
 	if (page_mapcount(page) > 1 && !migrate_all)
-		goto out_putpage;
+		goto out_putfolio;
 
-	if (PageHuge(page)) {
+	if (folio_test_hugetlb(folio)) {
 		if (PageHead(page)) {
-			isolated = isolate_hugetlb(page_folio(page), pagelist);
+			isolated = isolate_hugetlb(folio, pagelist);
 			err = isolated ? 1 : -EBUSY;
 		}
 	} else {
-		struct page *head;
-
-		head = compound_head(page);
-		isolated = isolate_lru_page(head);
+		isolated = folio_isolate_lru(folio);
 		if (!isolated) {
 			err = -EBUSY;
-			goto out_putpage;
+			goto out_putfolio;
 		}
 
 		err = 1;
-		list_add_tail(&head->lru, pagelist);
-		mod_node_page_state(page_pgdat(head),
-			NR_ISOLATED_ANON + page_is_file_lru(head),
-			thp_nr_pages(head));
+		list_add_tail(&folio->lru, pagelist);
+		node_stat_mod_folio(folio,
+			NR_ISOLATED_ANON + folio_is_file_lru(folio),
+			folio_nr_pages(folio));
 	}
-out_putpage:
+out_putfolio:
 	/*
-	 * Either remove the duplicate refcount from
-	 * isolate_lru_page() or drop the page ref if it was
-	 * not isolated.
+	 * Either remove the duplicate refcount from folio_isolate_lru()
+	 * or drop the folio ref if it was not isolated.
 	 */
-	put_page(page);
+	folio_put(folio);
 out:
 	mmap_read_unlock(mm);
 	return err;
-- 
2.41.0

From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz
Subject: [PATCH v2 7/8] mm: migrate: remove PageHead() check for HugeTLB in add_page_for_migration()
Date: Mon, 21 Aug 2023 19:56:23 +0800
Message-ID: <20230821115624.158759-8-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821115624.158759-1-wangkefeng.wang@huawei.com>
References: <20230821115624.158759-1-wangkefeng.wang@huawei.com>

HugeTLB and THP behave differently when passed the address of a tail
page: for THP, the entire THP page is migrated, but for HugeTLB,
-EACCES is returned (or -ENOENT before commit e66f17ff7177
("mm/hugetlb: take page table lock in follow_huge_pmd()")).

-EACCES	The page is mapped by multiple processes and can be moved
	only if MPOL_MF_MOVE_ALL is specified.

-ENOENT	The page is not present.

Checking the manual [1], neither errno is suitable. It is better to
keep the behaviour consistent between HugeTLB and THP when passed the
address of a tail page, so just remove the PageHead() check for
HugeTLB.

[1] https://man7.org/linux/man-pages/man2/move_pages.2.html

Suggested-by: Mike Kravetz
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Acked-by: Zi Yan
---
 mm/migrate.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 73572d5a5cd4..e8c3fb8974f9 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2093,10 +2093,8 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
 		goto out_putfolio;
 
 	if (folio_test_hugetlb(folio)) {
-		if (PageHead(page)) {
-			isolated = isolate_hugetlb(folio, pagelist);
-			err = isolated ? 1 : -EBUSY;
-		}
+		isolated = isolate_hugetlb(folio, pagelist);
+		err = isolated ? 1 : -EBUSY;
 	} else {
 		isolated = folio_isolate_lru(folio);
 		if (!isolated) {
-- 
2.41.0

From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton
Cc: Zi Yan, Mike Kravetz
Subject: [PATCH v2 8/8] mm: migrate: remove isolated variable in add_page_for_migration()
Date: Mon, 21 Aug 2023 19:56:24 +0800
Message-ID: <20230821115624.158759-9-wangkefeng.wang@huawei.com>
In-Reply-To: <20230821115624.158759-1-wangkefeng.wang@huawei.com>
References: <20230821115624.158759-1-wangkefeng.wang@huawei.com>

Directly check the return value of isolate_hugetlb() and
folio_isolate_lru() so that the isolated variable can be removed. Also
set err = -EBUSY up front, before isolation, and update err only when
the folio is successfully queued for migration, which unifies and
simplifies the code a bit.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
---
 mm/migrate.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index e8c3fb8974f9..9bbd9018ece7 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -2059,7 +2059,6 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
 	struct page *page;
 	struct folio *folio;
 	int err;
-	bool isolated;
 
 	mmap_read_lock(mm);
 	addr = (unsigned long)untagged_addr_remote(mm, p);
@@ -2092,15 +2091,13 @@ static int add_page_for_migration(struct mm_struct *mm, const void __user *p,
 	if (page_mapcount(page) > 1 && !migrate_all)
 		goto out_putfolio;
 
+	err = -EBUSY;
 	if (folio_test_hugetlb(folio)) {
-		isolated = isolate_hugetlb(folio, pagelist);
-		err = isolated ? 1 : -EBUSY;
+		if (isolate_hugetlb(folio, pagelist))
+			err = 1;
 	} else {
-		isolated = folio_isolate_lru(folio);
-		if (!isolated) {
-			err = -EBUSY;
+		if (!folio_isolate_lru(folio))
 			goto out_putfolio;
-		}
 
 		err = 1;
 		list_add_tail(&folio->lru, pagelist);
-- 
2.41.0