From: Mel Gorman <mgorman@techsingularity.net>
To: Vlastimil Babka
Cc: Andrew Morton, Jiri Slaby, Maxim Levitsky, Michal Hocko, Pedro Falcato,
	Paolo Bonzini, Chuyi Zhou, Linux-MM, LKML, Mel Gorman
Subject: [PATCH 1/4] mm, compaction: Rename compact_control->rescan to finish_pageblock
Date: Wed, 25 Jan 2023 13:44:31 +0000
Message-Id: <20230125134434.18017-2-mgorman@techsingularity.net>
In-Reply-To: <20230125134434.18017-1-mgorman@techsingularity.net>
References: <20230125134434.18017-1-mgorman@techsingularity.net>

The rescan field was not well named, albeit accurate at the time. Rename
the field to finish_pageblock to indicate that the remainder of the
pageblock should be scanned regardless of COMPACT_CLUSTER_MAX. The intent
is that pageblocks with transient failures get marked for skipping to
avoid revisiting the same pageblock.

Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
Acked-by: Vlastimil Babka
---
 mm/compaction.c | 24 ++++++++++++------------
 mm/internal.h   |  6 +++++-
 2 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/mm/compaction.c b/mm/compaction.c
index ca1603524bbe..c018b0e65720 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -1102,12 +1102,12 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 
 		/*
 		 * Avoid isolating too much unless this block is being
-		 * rescanned (e.g. dirty/writeback pages, parallel allocation)
+		 * fully scanned (e.g. dirty/writeback pages, parallel allocation)
 		 * or a lock is contended. For contention, isolate quickly to
 		 * potentially remove one source of contention.
 		 */
 		if (cc->nr_migratepages >= COMPACT_CLUSTER_MAX &&
-		    !cc->rescan && !cc->contended) {
+		    !cc->finish_pageblock && !cc->contended) {
 			++low_pfn;
 			break;
 		}
@@ -1172,14 +1172,14 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 	}
 
 	/*
-	 * Updated the cached scanner pfn once the pageblock has been scanned
+	 * Update the cached scanner pfn once the pageblock has been scanned.
	 * Pages will either be migrated in which case there is no point
	 * scanning in the near future or migration failed in which case the
	 * failure reason may persist. The block is marked for skipping if
	 * there were no pages isolated in the block or if the block is
	 * rescanned twice in a row.
	 */
-	if (low_pfn == end_pfn && (!nr_isolated || cc->rescan)) {
+	if (low_pfn == end_pfn && (!nr_isolated || cc->finish_pageblock)) {
 		if (valid_page && !skip_updated)
 			set_pageblock_skip(valid_page);
 		update_cached_migrate(cc, low_pfn);
@@ -2374,17 +2374,17 @@ compact_zone(struct compact_control *cc, struct capture_control *capc)
 		unsigned long iteration_start_pfn = cc->migrate_pfn;
 
 		/*
-		 * Avoid multiple rescans which can happen if a page cannot be
-		 * isolated (dirty/writeback in async mode) or if the migrated
-		 * pages are being allocated before the pageblock is cleared.
-		 * The first rescan will capture the entire pageblock for
-		 * migration. If it fails, it'll be marked skip and scanning
-		 * will proceed as normal.
+		 * Avoid multiple rescans of the same pageblock which can
+		 * happen if a page cannot be isolated (dirty/writeback in
+		 * async mode) or if the migrated pages are being allocated
+		 * before the pageblock is cleared. The first rescan will
+		 * capture the entire pageblock for migration. If it fails,
+		 * it'll be marked skip and scanning will proceed as normal.
 		 */
-		cc->rescan = false;
+		cc->finish_pageblock = false;
 		if (pageblock_start_pfn(last_migrated_pfn) ==
 		    pageblock_start_pfn(iteration_start_pfn)) {
-			cc->rescan = true;
+			cc->finish_pageblock = true;
 		}
 
 		switch (isolate_migratepages(cc)) {
diff --git a/mm/internal.h b/mm/internal.h
index bcf75a8b032d..21466d0ab22f 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -422,7 +422,11 @@ struct compact_control {
 	bool proactive_compaction;	/* kcompactd proactive compaction */
 	bool whole_zone;		/* Whole zone should/has been scanned */
 	bool contended;			/* Signal lock contention */
-	bool rescan;			/* Rescanning the same pageblock */
+	bool finish_pageblock;		/* Scan the remainder of a pageblock. Used
+					 * when there are potentially transient
+					 * isolation or migration failures to
+					 * ensure forward progress.
+					 */
 	bool alloc_contig;		/* alloc_contig_range allocation */
 };
 
-- 
2.35.3
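
As a minimal sketch of the semantics the changelog describes, the logic can
be modelled outside the kernel tree. This is only an illustrative userspace
model, not kernel code: the struct below is a pared-down stand-in for
struct compact_control, pageblock_start_pfn() is re-derived for an assumed
2MB pageblock of 4K pages, and the actual isolation/migration work is
stubbed out.

/*
 * Illustrative userspace sketch of the finish_pageblock semantics:
 * simplified names and constants, real work stubbed out.
 */
#include <stdbool.h>
#include <stdio.h>

#define PAGEBLOCK_ORDER		9		/* assumed: 2MB blocks of 4K pages */
#define PAGEBLOCK_NR_PAGES	(1UL << PAGEBLOCK_ORDER)
#define COMPACT_CLUSTER_MAX	32

/* Pared-down stand-in for struct compact_control. */
struct compact_control {
	unsigned long migrate_pfn;
	unsigned long nr_migratepages;
	bool finish_pageblock;
	bool contended;
};

static unsigned long pageblock_start_pfn(unsigned long pfn)
{
	return pfn & ~(PAGEBLOCK_NR_PAGES - 1);
}

/*
 * Model of the isolation cut-off: normally stop once COMPACT_CLUSTER_MAX
 * pages are held, but keep scanning to the end of the pageblock when
 * finish_pageblock is set or a lock was contended.
 */
static bool should_stop_isolating(struct compact_control *cc)
{
	return cc->nr_migratepages >= COMPACT_CLUSTER_MAX &&
	       !cc->finish_pageblock && !cc->contended;
}

int main(void)
{
	struct compact_control cc = { .migrate_pfn = 4096, .nr_migratepages = 32 };
	unsigned long last_migrated_pfn = 4100;	/* pretend a prior pass migrated here */

	/*
	 * Mirrors the compact_zone() hunk: revisiting the pageblock that the
	 * previous iteration migrated from requests that the remainder of it
	 * be scanned rather than stopping at COMPACT_CLUSTER_MAX again.
	 */
	cc.finish_pageblock = pageblock_start_pfn(last_migrated_pfn) ==
			      pageblock_start_pfn(cc.migrate_pfn);

	printf("finish_pageblock=%d stop_isolating=%d\n",
	       cc.finish_pageblock, should_stop_isolating(&cc));
	return 0;
}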