From: Huang Ying
To: Andrew Morton
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Huang Ying,
    Alistair Popple, Zi Yan, Baolin Wang, Yang Shi, Oscar Salvador,
    Matthew Wilcox, Bharata B Rao, haoxin, Minchan Kim
Subject: [PATCH -v3 1/9] migrate_pages: organize stats with struct migrate_pages_stats
Date: Mon, 16 Jan 2023 14:30:49 +0800
Message-Id: <20230116063057.653862-2-ying.huang@intel.com>
In-Reply-To: <20230116063057.653862-1-ying.huang@intel.com>
References: <20230116063057.653862-1-ying.huang@intel.com>

Define struct migrate_pages_stats to organize the various statistics
in migrate_pages().  This makes it easier to collect and consume the
statistics in multiple functions.  This will be needed by the
following patches in the series.

Signed-off-by: "Huang, Ying"
Reviewed-by: Alistair Popple
Reviewed-by: Zi Yan
Reviewed-by: Baolin Wang
Cc: Yang Shi
Cc: Oscar Salvador
Cc: Matthew Wilcox
Cc: Bharata B Rao
Cc: haoxin
Cc: Minchan Kim
---
 mm/migrate.c | 60 +++++++++++++++++++++++++++++-----------------------
 1 file changed, 34 insertions(+), 26 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index a4d3fc65085f..ef388a9e4747 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1396,6 +1396,16 @@ static inline int try_split_folio(struct folio *folio, struct list_head *split_f
 	return rc;
 }
 
+struct migrate_pages_stats {
+	int nr_succeeded;	/* Normal and large folios migrated successfully, in
+				   units of base pages */
+	int nr_failed_pages;	/* Normal and large folios failed to be migrated, in
+				   units of base pages. Untried folios aren't counted */
+	int nr_thp_succeeded;	/* THP migrated successfully */
+	int nr_thp_failed;	/* THP failed to be migrated */
+	int nr_thp_split;	/* THP split before migrating */
+};
+
 /*
  * migrate_pages - migrate the folios specified in a list, to the free folios
  * supplied as the target for the page migration
@@ -1430,13 +1440,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	int large_retry = 1;
 	int thp_retry = 1;
 	int nr_failed = 0;
-	int nr_failed_pages = 0;
 	int nr_retry_pages = 0;
-	int nr_succeeded = 0;
-	int nr_thp_succeeded = 0;
 	int nr_large_failed = 0;
-	int nr_thp_failed = 0;
-	int nr_thp_split = 0;
 	int pass = 0;
 	bool is_large = false;
 	bool is_thp = false;
@@ -1446,9 +1451,11 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	LIST_HEAD(split_folios);
 	bool nosplit = (reason == MR_NUMA_MISPLACED);
 	bool no_split_folio_counting = false;
+	struct migrate_pages_stats stats;
 
 	trace_mm_migrate_pages_start(mode, reason);
 
+	memset(&stats, 0, sizeof(stats));
 split_folio_migration:
 	for (pass = 0; pass < 10 && (retry || large_retry); pass++) {
 		retry = 0;
@@ -1502,9 +1509,9 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 				/* Large folio migration is unsupported */
 				if (is_large) {
 					nr_large_failed++;
-					nr_thp_failed += is_thp;
+					stats.nr_thp_failed += is_thp;
 					if (!try_split_folio(folio, &split_folios)) {
-						nr_thp_split += is_thp;
+						stats.nr_thp_split += is_thp;
 						break;
 					}
 				/* Hugetlb migration is unsupported */
@@ -1512,7 +1519,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 					nr_failed++;
 				}
 
-				nr_failed_pages += nr_pages;
+				stats.nr_failed_pages += nr_pages;
 				list_move_tail(&folio->lru, &ret_folios);
 				break;
 			case -ENOMEM:
@@ -1522,13 +1529,13 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 				 */
 				if (is_large) {
 					nr_large_failed++;
-					nr_thp_failed += is_thp;
+					stats.nr_thp_failed += is_thp;
 					/* Large folio NUMA faulting doesn't split to retry. */
 					if (!nosplit) {
 						int ret = try_split_folio(folio, &split_folios);
 
 						if (!ret) {
-							nr_thp_split += is_thp;
+							stats.nr_thp_split += is_thp;
 							break;
 						} else if (reason == MR_LONGTERM_PIN &&
 							   ret == -EAGAIN) {
@@ -1546,7 +1553,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 					nr_failed++;
 				}
 
-				nr_failed_pages += nr_pages + nr_retry_pages;
+				stats.nr_failed_pages += nr_pages + nr_retry_pages;
 				/*
 				 * There might be some split folios of fail-to-migrate large
 				 * folios left in split_folios list. Move them back to migration
@@ -1556,7 +1563,7 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 				list_splice_init(&split_folios, from);
 				/* nr_failed isn't updated for not used */
 				nr_large_failed += large_retry;
-				nr_thp_failed += thp_retry;
+				stats.nr_thp_failed += thp_retry;
 				goto out;
 			case -EAGAIN:
 				if (is_large) {
@@ -1568,8 +1575,8 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 				nr_retry_pages += nr_pages;
 				break;
 			case MIGRATEPAGE_SUCCESS:
-				nr_succeeded += nr_pages;
-				nr_thp_succeeded += is_thp;
+				stats.nr_succeeded += nr_pages;
+				stats.nr_thp_succeeded += is_thp;
 				break;
 			default:
 				/*
@@ -1580,20 +1587,20 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 				 */
 				if (is_large) {
 					nr_large_failed++;
-					nr_thp_failed += is_thp;
+					stats.nr_thp_failed += is_thp;
 				} else if (!no_split_folio_counting) {
 					nr_failed++;
 				}
 
-				nr_failed_pages += nr_pages;
+				stats.nr_failed_pages += nr_pages;
 				break;
 			}
 		}
 	}
 	nr_failed += retry;
 	nr_large_failed += large_retry;
-	nr_thp_failed += thp_retry;
-	nr_failed_pages += nr_retry_pages;
+	stats.nr_thp_failed += thp_retry;
+	stats.nr_failed_pages += nr_retry_pages;
 	/*
 	 * Try to migrate split folios of fail-to-migrate large folios, no
 	 * nr_failed counting in this round, since all split folios of a
@@ -1626,16 +1633,17 @@ int migrate_pages(struct list_head *from, new_page_t get_new_page,
 	if (list_empty(from))
 		rc = 0;
 
-	count_vm_events(PGMIGRATE_SUCCESS, nr_succeeded);
-	count_vm_events(PGMIGRATE_FAIL, nr_failed_pages);
-	count_vm_events(THP_MIGRATION_SUCCESS, nr_thp_succeeded);
-	count_vm_events(THP_MIGRATION_FAIL, nr_thp_failed);
-	count_vm_events(THP_MIGRATION_SPLIT, nr_thp_split);
-	trace_mm_migrate_pages(nr_succeeded, nr_failed_pages, nr_thp_succeeded,
-			       nr_thp_failed, nr_thp_split, mode, reason);
+	count_vm_events(PGMIGRATE_SUCCESS, stats.nr_succeeded);
+	count_vm_events(PGMIGRATE_FAIL, stats.nr_failed_pages);
+	count_vm_events(THP_MIGRATION_SUCCESS, stats.nr_thp_succeeded);
+	count_vm_events(THP_MIGRATION_FAIL, stats.nr_thp_failed);
+	count_vm_events(THP_MIGRATION_SPLIT, stats.nr_thp_split);
+	trace_mm_migrate_pages(stats.nr_succeeded, stats.nr_failed_pages,
+			       stats.nr_thp_succeeded, stats.nr_thp_failed,
+			       stats.nr_thp_split, mode, reason);
 
 	if (ret_succeeded)
-		*ret_succeeded = nr_succeeded;
+		*ret_succeeded = stats.nr_succeeded;
 
 	return rc;
 }
-- 
2.35.1