From: Huang Ying <ying.huang@intel.com>
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying, Zi Yan,
	Yang Shi, Baolin Wang, Oscar Salvador, Matthew Wilcox, Bharata B Rao,
	Alistair Popple, haoxin
Subject: [PATCH 7/8] migrate_pages: share more code between _unmap and _move
Date: Tue, 27 Dec 2022 08:28:58 +0800
Message-Id: <20221227002859.27740-8-ying.huang@intel.com>
In-Reply-To: <20221227002859.27740-1-ying.huang@intel.com>
References: <20221227002859.27740-1-ying.huang@intel.com>

This is a code cleanup patch to reduce the duplicated code between the
_unmap and _move stages of migrate_pages().  No functionality change is
expected.
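Concretely, __migrate_folio_unmap() is folded into migrate_folio_unmap()
and __migrate_folio_move() into migrate_folio_move(), and all failure
paths now funnel through migrate_folio_undo_src() and
migrate_folio_undo_dst().  The undo helpers grow "locked" flags (and
migrate_folio_undo_src() accepts a NULL "ret" list) so that a single exit
label can roll back exactly the state that was actually set up.  Below is
a minimal user-space C sketch of that error-handling idiom, for
illustration only; every identifier in it is invented and none of it is
kernel code:

	#include <stdbool.h>
	#include <stdio.h>
	#include <stdlib.h>

	struct res { bool held; };

	/* Single undo helper: the flags say how far setup got. */
	static void undo(struct res *r, bool locked, bool allocated)
	{
		if (locked)
			r->held = false;	/* cf. conditional folio_unlock() */
		if (allocated)
			free(r);
	}

	static int do_work(bool fail)
	{
		struct res *r;
		bool locked = false;
		int rc = -1;

		r = malloc(sizeof(*r));
		if (!r)
			return -1;

		r->held = true;		/* cf. folio_trylock() succeeding */
		locked = true;

		if (fail)
			goto out;	/* one label replaces a label ladder */

		rc = 0;
	out:
		if (rc)
			undo(r, locked, true);
		else
			free(r);
		return rc;
	}

	int main(void)
	{
		printf("fail: %d, ok: %d\n", do_work(true), do_work(false));
		return 0;
	}

The benefit mirrors the patch: a setup step added later only needs to set
its flag, instead of threading a new unlock label through every error
path.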
Signed-off-by: "Huang, Ying" <ying.huang@intel.com>
Cc: Zi Yan
Cc: Yang Shi
Cc: Baolin Wang
Cc: Oscar Salvador
Cc: Matthew Wilcox
Cc: Bharata B Rao
Cc: Alistair Popple
Cc: haoxin
---
 mm/migrate.c | 208 ++++++++++++++++++++-------------------------
 1 file changed, 82 insertions(+), 126 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index 70b987391296..70a40b8fee1f 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1030,21 +1030,26 @@ static void __migrate_folio_extract(struct folio *dst,
 static void migrate_folio_undo_src(struct folio *src,
 				   int page_was_mapped,
 				   struct anon_vma *anon_vma,
+				   bool locked,
 				   struct list_head *ret)
 {
 	if (page_was_mapped)
 		remove_migration_ptes(src, src, false);
 	if (anon_vma)
 		put_anon_vma(anon_vma);
-	folio_unlock(src);
-	list_move_tail(&src->lru, ret);
+	if (locked)
+		folio_unlock(src);
+	if (ret)
+		list_move_tail(&src->lru, ret);
 }
 
 static void migrate_folio_undo_dst(struct folio *dst,
+				   bool locked,
 				   free_page_t put_new_page,
 				   unsigned long private)
 {
-	folio_unlock(dst);
+	if (locked)
+		folio_unlock(dst);
 	if (put_new_page)
 		put_new_page(&dst->page, private);
 	else
@@ -1068,14 +1073,44 @@ static void migrate_folio_done(struct folio *src,
 	folio_put(src);
 }
 
-static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
-				 int force, bool force_lock, enum migrate_mode mode)
+/* Obtain the lock on page, remove all ptes. */
+static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page,
+			       unsigned long private, struct folio *src,
+			       struct folio **dstp, int force, bool force_lock,
+			       enum migrate_mode mode, enum migrate_reason reason,
+			       struct list_head *ret)
 {
-	int rc = -EAGAIN;
+	struct folio *dst;
+	int rc = MIGRATEPAGE_UNMAP;
+	struct page *newpage = NULL;
 	int page_was_mapped = 0;
 	struct anon_vma *anon_vma = NULL;
 	bool is_lru = !__PageMovable(&src->page);
+	bool locked = false;
+	bool dst_locked = false;
+
+	if (!thp_migration_supported() && folio_test_transhuge(src))
+		return -ENOSYS;
+
+	if (folio_ref_count(src) == 1) {
+		/* Folio was freed from under us. So we are done. */
+		folio_clear_active(src);
+		folio_clear_unevictable(src);
+		/* free_pages_prepare() will clear PG_isolated. */
+		list_del(&src->lru);
+		migrate_folio_done(src, reason);
+		return MIGRATEPAGE_SUCCESS;
+	}
+
+	newpage = get_new_page(&src->page, private);
+	if (!newpage)
+		return -ENOMEM;
+	dst = page_folio(newpage);
+	*dstp = dst;
+
+	dst->private = NULL;
 
+	rc = -EAGAIN;
 	if (!folio_trylock(src)) {
 		if (!force || mode == MIGRATE_ASYNC)
 			goto out;
@@ -1103,6 +1138,7 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 
 		folio_lock(src);
 	}
+	locked = true;
 
 	if (folio_test_writeback(src)) {
 		/*
@@ -1117,10 +1153,10 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 			break;
 		default:
 			rc = -EBUSY;
-			goto out_unlock;
+			goto out;
 		}
 		if (!force)
-			goto out_unlock;
+			goto out;
 		folio_wait_writeback(src);
 	}
 
@@ -1150,7 +1186,8 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 	 * This is much like races on refcount of oldpage: just don't BUG().
 	 */
 	if (unlikely(!folio_trylock(dst)))
-		goto out_unlock;
+		goto out;
+	dst_locked = true;
 
 	if (unlikely(!is_lru)) {
 		__migrate_folio_record(dst, page_was_mapped, anon_vma);
@@ -1172,7 +1209,7 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 	if (!src->mapping) {
 		if (folio_test_private(src)) {
 			try_to_free_buffers(src);
-			goto out_unlock_both;
+			goto out;
 		}
 	} else if (folio_mapped(src)) {
 		/* Establish migration ptes */
@@ -1187,75 +1224,27 @@ static int __migrate_folio_unmap(struct folio *src, struct folio *dst,
 		return MIGRATEPAGE_UNMAP;
 	}
 
-
-	if (page_was_mapped)
-		remove_migration_ptes(src, src, false);
-
-out_unlock_both:
-	folio_unlock(dst);
-out_unlock:
-	/* Drop an anon_vma reference if we took one */
-	if (anon_vma)
-		put_anon_vma(anon_vma);
-	folio_unlock(src);
 out:
-
-	return rc;
-}
-
-/* Obtain the lock on page, remove all ptes. */
-static int migrate_folio_unmap(new_page_t get_new_page, free_page_t put_new_page,
-			       unsigned long private, struct folio *src,
-			       struct folio **dstp, int force, bool force_lock,
-			       enum migrate_mode mode, enum migrate_reason reason,
-			       struct list_head *ret)
-{
-	struct folio *dst;
-	int rc = MIGRATEPAGE_UNMAP;
-	struct page *newpage = NULL;
-
-	if (!thp_migration_supported() && folio_test_transhuge(src))
-		return -ENOSYS;
-
-	if (folio_ref_count(src) == 1) {
-		/* Folio was freed from under us. So we are done. */
-		folio_clear_active(src);
-		folio_clear_unevictable(src);
-		/* free_pages_prepare() will clear PG_isolated. */
-		list_del(&src->lru);
-		migrate_folio_done(src, reason);
-		return MIGRATEPAGE_SUCCESS;
-	}
-
-	newpage = get_new_page(&src->page, private);
-	if (!newpage)
-		return -ENOMEM;
-	dst = page_folio(newpage);
-	*dstp = dst;
-
-	dst->private = NULL;
-	rc = __migrate_folio_unmap(src, dst, force, force_lock, mode);
-	if (rc == MIGRATEPAGE_UNMAP)
-		return rc;
-
 	/*
 	 * A page that has not been migrated will have kept its
 	 * references and be restored.
 	 */
 	/* restore the folio to right list. */
-	if (rc != -EAGAIN && rc != -EDEADLOCK)
-		list_move_tail(&src->lru, ret);
+	if (rc == -EAGAIN || rc == -EDEADLOCK)
+		ret = NULL;
 
-	if (put_new_page)
-		put_new_page(&dst->page, private);
-	else
-		folio_put(dst);
+	migrate_folio_undo_src(src, page_was_mapped, anon_vma, locked, ret);
+	if (dst)
+		migrate_folio_undo_dst(dst, dst_locked, put_new_page, private);
 
 	return rc;
 }
 
-static int __migrate_folio_move(struct folio *src, struct folio *dst,
-				enum migrate_mode mode)
+/* Migrate the folio to the newly allocated folio in dst. */
+static int migrate_folio_move(free_page_t put_new_page, unsigned long private,
+			      struct folio *src, struct folio *dst,
+			      enum migrate_mode mode, enum migrate_reason reason,
+			      struct list_head *ret)
 {
 	int rc;
 	int page_was_mapped = 0;
@@ -1264,9 +1253,10 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
 	__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
 
 	rc = move_to_new_folio(dst, src, mode);
+	if (rc)
+		goto out;
 
-	if (rc != -EAGAIN)
-		list_del(&dst->lru);
+	list_del(&dst->lru);
 	/*
 	 * When successful, push dst to LRU immediately: so that if it
 	 * turns out to be an mlocked page, remove_migration_ptes() will
@@ -1276,74 +1266,40 @@ static int __migrate_folio_move(struct folio *src, struct folio *dst,
 	 * unsuccessful, and other cases when a page has been temporarily
 	 * isolated from the unevictable LRU: but this case is the easiest.
 	 */
-	if (rc == MIGRATEPAGE_SUCCESS) {
-		folio_add_lru(dst);
-		if (page_was_mapped)
-			lru_add_drain();
-	}
-
-	if (rc == -EAGAIN) {
-		__migrate_folio_record(dst, page_was_mapped, anon_vma);
-		return rc;
-	}
-
+	folio_add_lru(dst);
 	if (page_was_mapped)
-		remove_migration_ptes(src,
-			rc == MIGRATEPAGE_SUCCESS ? dst : src, false);
+		lru_add_drain();
 
+	if (page_was_mapped)
+		remove_migration_ptes(src, dst, false);
 	folio_unlock(dst);
-	/* Drop an anon_vma reference if we took one */
-	if (anon_vma)
-		put_anon_vma(anon_vma);
-	folio_unlock(src);
+	set_page_owner_migrate_reason(&dst->page, reason);
 	/*
 	 * If migration is successful, decrease refcount of dst,
 	 * which will not free the page because new page owner increased
 	 * refcounter.
 	 */
-	if (rc == MIGRATEPAGE_SUCCESS)
-		folio_put(dst);
-
-	return rc;
-}
-
-/* Migrate the folio to the newly allocated folio in dst. */
-static int migrate_folio_move(free_page_t put_new_page, unsigned long private,
-			      struct folio *src, struct folio *dst,
-			      enum migrate_mode mode, enum migrate_reason reason,
-			      struct list_head *ret)
-{
-	int rc;
-
-	rc = __migrate_folio_move(src, dst, mode);
-	if (rc == MIGRATEPAGE_SUCCESS)
-		set_page_owner_migrate_reason(&dst->page, reason);
-
-	if (rc != -EAGAIN) {
-		/*
-		 * A folio that has been migrated has all references
-		 * removed and will be freed. A folio that has not been
-		 * migrated will have kept its references and be restored.
-		 */
-		list_del(&src->lru);
-	}
+	folio_put(dst);
 
 	/*
-	 * If migration is successful, releases reference grabbed during
-	 * isolation. Otherwise, restore the folio to right list unless
-	 * we want to retry.
+	 * A page that has been migrated has all references removed
+	 * and will be freed.
 	 */
-	if (rc == MIGRATEPAGE_SUCCESS) {
-		migrate_folio_done(src, reason);
-	} else if (rc != -EAGAIN) {
-		list_add_tail(&src->lru, ret);
+	list_del(&src->lru);
+	migrate_folio_undo_src(src, 0, anon_vma, true, NULL);
+	migrate_folio_done(src, reason);
 
-		if (put_new_page)
-			put_new_page(&dst->page, private);
-		else
-			folio_put(dst);
+	return rc;
+out:
+	if (rc == -EAGAIN) {
+		__migrate_folio_record(dst, page_was_mapped, anon_vma);
+		return rc;
 	}
 
+	migrate_folio_undo_src(src, page_was_mapped, anon_vma, true, ret);
+	list_del(&dst->lru);
+	migrate_folio_undo_dst(dst, true, put_new_page, private);
+
 	return rc;
 }
 
@@ -1849,9 +1805,9 @@ static int migrate_pages_batch(struct list_head *from, new_page_t get_new_page,
 
 		__migrate_folio_extract(dst, &page_was_mapped, &anon_vma);
 		migrate_folio_undo_src(folio, page_was_mapped, anon_vma,
-				       ret_folios);
+				       true, ret_folios);
 		list_del(&dst->lru);
-		migrate_folio_undo_dst(dst, put_new_page, private);
+		migrate_folio_undo_dst(dst, true, put_new_page, private);
 		dst = dst2;
 		dst2 = list_next_entry(dst, lru);
 	}
-- 
2.35.1
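For context beyond this patch: the calling convention is unchanged --
migrate_folio_unmap() either finishes a folio immediately
(MIGRATEPAGE_SUCCESS), fails, or returns MIGRATEPAGE_UNMAP, and only
unmapped folios are later passed to migrate_folio_move();
migrate_pages_batch() exploits this split to batch the unmap stage across
many folios.  A small user-space sketch of that two-phase flow, with all
names invented as stand-ins (this is not the kernel API):

	#include <stdbool.h>
	#include <stdio.h>

	enum { PHASE1_DONE = 0, PHASE1_UNMAPPED = 1 };	/* cf. MIGRATEPAGE_* */

	/* Phase 1 (cf. migrate_folio_unmap()): finish or defer one item. */
	static int phase1(int id, bool *deferred)
	{
		*deferred = (id % 2 == 0);	/* pretend even ids defer */
		return *deferred ? PHASE1_UNMAPPED : PHASE1_DONE;
	}

	/* Phase 2 (cf. migrate_folio_move()): complete a deferred item. */
	static void phase2(int id)
	{
		printf("moved %d\n", id);
	}

	int main(void)
	{
		bool deferred[4];
		int id;

		/* Run all of phase 1 first, then all of phase 2: batching
		 * at this boundary is what enables batched TLB flushing. */
		for (id = 0; id < 4; id++)
			if (phase1(id, &deferred[id]) != PHASE1_UNMAPPED)
				printf("done early: %d\n", id);
		for (id = 0; id < 4; id++)
			if (deferred[id])
				phase2(id);
		return 0;
	}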