From: Huang Ying
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Andrew Morton, Huang Ying, Zi Yan, Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox
Subject: [RFC 2/6] mm/migrate_pages: split unmap_and_move() to _unmap() and _move()
Date: Wed, 21 Sep 2022 14:06:12 +0800
Message-Id: <20220921060616.73086-3-ying.huang@intel.com>
In-Reply-To: <20220921060616.73086-1-ying.huang@intel.com>
References: <20220921060616.73086-1-ying.huang@intel.com>

This is a preparation patch to batch the page unmapping and moving for
normal pages and THPs. In this patch, unmap_and_move() is split into
migrate_page_unmap() and migrate_page_move(), so that _unmap() and
_move() can be batched in separate loops later. To pass information
between the unmap and move stages, the otherwise unused
newpage->mapping and newpage->private fields are used.
Signed-off-by: "Huang, Ying"
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
Reviewed-by: Baolin Wang
---
 mm/migrate.c | 164 ++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 122 insertions(+), 42 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 117134f1c6dc..4a81e0bfdbcd 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -976,13 +976,32 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
 	return rc;
 }
 
-static int __unmap_and_move(struct page *page, struct page *newpage,
+static void __migrate_page_record(struct page *newpage,
+				  int page_was_mapped,
+				  struct anon_vma *anon_vma)
+{
+	newpage->mapping = (struct address_space *)anon_vma;
+	newpage->private = page_was_mapped;
+}
+
+static void __migrate_page_extract(struct page *newpage,
+				   int *page_was_mappedp,
+				   struct anon_vma **anon_vmap)
+{
+	*anon_vmap = (struct anon_vma *)newpage->mapping;
+	*page_was_mappedp = newpage->private;
+	newpage->mapping = NULL;
+	newpage->private = 0;
+}
+
+#define MIGRATEPAGE_UNMAP	1
+
+static int __migrate_page_unmap(struct page *page, struct page *newpage,
 				int force, enum migrate_mode mode)
 {
 	struct folio *folio = page_folio(page);
-	struct folio *dst = page_folio(newpage);
 	int rc = -EAGAIN;
-	bool page_was_mapped = false;
+	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__PageMovable(page);
 
@@ -1058,8 +1077,8 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		goto out_unlock;
 
 	if (unlikely(!is_lru)) {
-		rc = move_to_new_folio(dst, folio, mode);
-		goto out_unlock_both;
+		__migrate_page_record(newpage, page_was_mapped, anon_vma);
+		return MIGRATEPAGE_UNMAP;
 	}
 
 	/*
@@ -1085,11 +1104,41 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		VM_BUG_ON_PAGE(PageAnon(page) && !PageKsm(page) && !anon_vma, page);
 		try_to_migrate(folio, 0);
-		page_was_mapped = true;
+		page_was_mapped = 1;
+	}
+
+	if (!page_mapped(page)) {
+		__migrate_page_record(newpage, page_was_mapped, anon_vma);
+		return MIGRATEPAGE_UNMAP;
 	}
 
-	if (!page_mapped(page))
-		rc = move_to_new_folio(dst, folio, mode);
+	if (page_was_mapped)
+		remove_migration_ptes(folio, folio, false);
+
+out_unlock_both:
+	unlock_page(newpage);
+out_unlock:
+	/* Drop an anon_vma reference if we took one */
+	if (anon_vma)
+		put_anon_vma(anon_vma);
+	unlock_page(page);
+out:
+
+	return rc;
+}
+
+static int __migrate_page_move(struct page *page, struct page *newpage,
+			       enum migrate_mode mode)
+{
+	struct folio *folio = page_folio(page);
+	struct folio *dst = page_folio(newpage);
+	int rc;
+	int page_was_mapped = 0;
+	struct anon_vma *anon_vma = NULL;
+
+	__migrate_page_extract(newpage, &page_was_mapped, &anon_vma);
+
+	rc = move_to_new_folio(dst, folio, mode);
 
 	/*
 	 * When successful, push newpage to LRU immediately: so that if it
@@ -1110,14 +1159,11 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 		remove_migration_ptes(folio,
 			rc == MIGRATEPAGE_SUCCESS ? dst : folio, false);
 
-out_unlock_both:
 	unlock_page(newpage);
-out_unlock:
 	/* Drop an anon_vma reference if we took one */
 	if (anon_vma)
 		put_anon_vma(anon_vma);
 	unlock_page(page);
-out:
 	/*
 	 * If migration is successful, decrease refcount of the newpage,
 	 * which will not free the page because new page owner increased
@@ -1129,18 +1175,31 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	return rc;
 }
 
-/*
- * Obtain the lock on page, remove all ptes and migrate the page
- * to the newly allocated page in newpage.
- */
-static int unmap_and_move(new_page_t get_new_page,
-			  free_page_t put_new_page,
-			  unsigned long private, struct page *page,
-			  int force, enum migrate_mode mode,
-			  enum migrate_reason reason,
-			  struct list_head *ret)
+static void migrate_page_done(struct page *page,
+			      enum migrate_reason reason)
+{
+	/*
+	 * Compaction can migrate also non-LRU pages which are
+	 * not accounted to NR_ISOLATED_*. They can be recognized
+	 * as __PageMovable
+	 */
+	if (likely(!__PageMovable(page)))
+		mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
+				    page_is_file_lru(page), -thp_nr_pages(page));
+
+	if (reason != MR_MEMORY_FAILURE)
+		/* We release the page in page_handle_poison. */
+		put_page(page);
+}
+
+/* Obtain the lock on page, remove all ptes. */
+static int migrate_page_unmap(new_page_t get_new_page, free_page_t put_new_page,
+			      unsigned long private, struct page *page,
+			      struct page **newpagep, int force,
+			      enum migrate_mode mode, enum migrate_reason reason,
+			      struct list_head *ret)
 {
-	int rc = MIGRATEPAGE_SUCCESS;
+	int rc = MIGRATEPAGE_UNMAP;
 	struct page *newpage = NULL;
 
 	if (!thp_migration_supported() && PageTransHuge(page))
@@ -1151,19 +1210,48 @@ static int unmap_and_move(new_page_t get_new_page,
 		ClearPageActive(page);
 		ClearPageUnevictable(page);
 		/* free_pages_prepare() will clear PG_isolated. */
-		goto out;
+		list_del(&page->lru);
+		migrate_page_done(page, reason);
+		return MIGRATEPAGE_SUCCESS;
 	}
 
 	newpage = get_new_page(page, private);
 	if (!newpage)
 		return -ENOMEM;
+	*newpagep = newpage;
 
-	newpage->private = 0;
-	rc = __unmap_and_move(page, newpage, force, mode);
+	rc = __migrate_page_unmap(page, newpage, force, mode);
+	if (rc == MIGRATEPAGE_UNMAP)
+		return rc;
+
+	/*
+	 * A page that has not been migrated will have kept its
+	 * references and be restored.
+	 */
+	/* restore the page to right list. */
+	if (rc != -EAGAIN)
+		list_move_tail(&page->lru, ret);
+
+	if (put_new_page)
+		put_new_page(newpage, private);
+	else
+		put_page(newpage);
+
+	return rc;
+}
+
+/* Migrate the page to the newly allocated page in newpage. */
+static int migrate_page_move(free_page_t put_new_page, unsigned long private,
+			     struct page *page, struct page *newpage,
+			     enum migrate_mode mode, enum migrate_reason reason,
+			     struct list_head *ret)
+{
+	int rc;
+
+	rc = __migrate_page_move(page, newpage, mode);
 	if (rc == MIGRATEPAGE_SUCCESS)
 		set_page_owner_migrate_reason(newpage, reason);
 
-out:
 	if (rc != -EAGAIN) {
 		/*
 		 * A page that has been migrated has all references
@@ -1179,20 +1267,7 @@ static int unmap_and_move(new_page_t get_new_page,
 	 * we want to retry.
 	 */
 	if (rc == MIGRATEPAGE_SUCCESS) {
-		/*
-		 * Compaction can migrate also non-LRU pages which are
-		 * not accounted to NR_ISOLATED_*. They can be recognized
-		 * as __PageMovable
-		 */
-		if (likely(!__PageMovable(page)))
-			mod_node_page_state(page_pgdat(page), NR_ISOLATED_ANON +
-					    page_is_file_lru(page), -thp_nr_pages(page));
-
-		if (reason != MR_MEMORY_FAILURE)
-			/*
-			 * We release the page in page_handle_poison.
-			 */
-			put_page(page);
+		migrate_page_done(page, reason);
 	} else {
 		if (rc != -EAGAIN)
 			list_add_tail(&page->lru, ret);
@@ -1405,6 +1480,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	int pass = 0;
 	bool is_thp = false;
 	struct page *page;
+	struct page *newpage = NULL;
 	struct page *page2;
 	int rc, nr_subpages;
 	LIST_HEAD(ret_pages);
@@ -1493,9 +1569,13 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 			if (PageHuge(page))
 				continue;
 
-			rc = unmap_and_move(get_new_page, put_new_page,
-					    private, page, pass > 2, mode,
+			rc = migrate_page_unmap(get_new_page, put_new_page, private,
+						page, &newpage, pass > 2, mode,
 					    reason, &ret_pages);
+			if (rc == MIGRATEPAGE_UNMAP)
+				rc = migrate_page_move(put_new_page, private,
+						       page, newpage, mode,
+						       reason, &ret_pages);
 			/*
 			 * The rules are:
 			 *	Success: page will be freed
-- 
2.35.1