From: Kefeng Wang
To: Andrew Morton
CC: Mike Rapoport, Matthew Wilcox, David Hildenbrand, Zi Yan, Kefeng Wang
Subject: [PATCH -next 9/9] mm: convert page_cpupid_reset_last() to folio_cpupid_reset_last()
Date: Tue, 26 Sep 2023 08:52:54 +0800
Message-ID: <20230926005254.2861577-10-wangkefeng.wang@huawei.com>
In-Reply-To: <20230926005254.2861577-1-wangkefeng.wang@huawei.com>
References: <20230926005254.2861577-1-wangkefeng.wang@huawei.com>

There is no need to fill in a default cpupid value for every struct page:
cpupid is only used by NUMA balancing, and the pages involved in NUMA
balancing all come from the buddy allocator, where free_pages_prepare()
already calls page_cpupid_reset_last() to initialize it. So drop the
page_cpupid_reset_last() call from __init_single_page(), then make
page_cpupid_reset_last() take a folio and rename it to
folio_cpupid_reset_last().
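For reviewers unfamiliar with the cpupid encoding, below is a minimal
user-space sketch of what the reset value means. The 8-bit field width
is an assumption for illustration only (the real LAST_CPUPID_SHIFT
depends on the kernel configuration); the "-1 & LAST_CPUPID_MASK"
arithmetic mirrors the _last_cpupid case in the patch below.

/*
 * Illustrative user-space sketch, not kernel code: it shows that
 * "-1 & LAST_CPUPID_MASK" produces the all-ones sentinel meaning
 * "no last cpupid recorded".  LAST_CPUPID_SHIFT == 8 is assumed here
 * purely for demonstration; the real width is config dependent.
 */
#include <assert.h>
#include <stdio.h>

#define LAST_CPUPID_SHIFT	8
#define LAST_CPUPID_MASK	((1 << LAST_CPUPID_SHIFT) - 1)

static int cpupid_reset_value(void)
{
	/* All mask bits set: no valid cpu/pid has been recorded yet. */
	return -1 & LAST_CPUPID_MASK;
}

int main(void)
{
	int cpupid = cpupid_reset_value();

	assert(cpupid == LAST_CPUPID_MASK);
	printf("reset cpupid = 0x%x\n", cpupid);
	return 0;
}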
Signed-off-by: Kefeng Wang
---
 include/linux/mm.h | 10 +++++-----
 mm/mm_init.c       |  1 -
 mm/page_alloc.c    |  2 +-
 3 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index a6f4b55bf469..ca66a05eb2ed 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -1692,9 +1692,9 @@ static inline int folio_cpupid_last(struct folio *folio)
 {
 	return folio->_last_cpupid;
 }
-static inline void page_cpupid_reset_last(struct page *page)
+static inline void folio_cpupid_reset_last(struct folio *folio)
 {
-	page->_last_cpupid = -1 & LAST_CPUPID_MASK;
+	folio->_last_cpupid = -1 & LAST_CPUPID_MASK;
 }
 #else
 static inline int folio_cpupid_last(struct folio *folio)
@@ -1704,9 +1704,9 @@ static inline int folio_cpupid_last(struct folio *folio)
 
 extern int folio_cpupid_xchg_last(struct folio *folio, int cpupid);
 
-static inline void page_cpupid_reset_last(struct page *page)
+static inline void folio_cpupid_reset_last(struct folio *folio)
 {
-	page->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
+	folio->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
 }
 #endif /* LAST_CPUPID_NOT_IN_PAGE_FLAGS */
 
@@ -1769,7 +1769,7 @@ static inline bool cpupid_pid_unset(int cpupid)
 	return true;
 }
 
-static inline void page_cpupid_reset_last(struct page *page)
+static inline void folio_cpupid_reset_last(struct folio *folio)
 {
 }
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 06a72c223bce..74c0dc27fbf1 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -563,7 +563,6 @@ void __meminit __init_single_page(struct page *page, unsigned long pfn,
 	set_page_links(page, zone, nid, pfn);
 	init_page_count(page);
 	page_mapcount_reset(page);
-	page_cpupid_reset_last(page);
 	page_kasan_tag_reset(page);
 
 	INIT_LIST_HEAD(&page->lru);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a888b9d57751..852fc78ddb34 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1126,7 +1126,7 @@ static __always_inline bool free_pages_prepare(struct page *page,
 		return false;
 	}
 
-	page_cpupid_reset_last(page);
+	folio_cpupid_reset_last(folio);
 	page->flags &= ~PAGE_FLAGS_CHECK_AT_PREP;
 	reset_page_owner(page, order);
 	page_table_check_free(page, order);
-- 
2.27.0
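As a side note for review, a similar user-space sketch of the
in-page-flags variant touched by the mm.h hunk at line 1704 follows.
The LAST_CPUPID_SHIFT and LAST_CPUPID_PGSHIFT values are made up for
illustration; only the "|=" reset and the read-back shift/mask mirror
the pattern in the patch.

/*
 * Illustrative sketch, not kernel code: when the last-cpupid value is
 * stored inside folio->flags, "reset" simply sets every bit of that
 * field.  The struct, shift, and width below are assumptions for
 * demonstration.
 */
#include <assert.h>
#include <stdio.h>

#define LAST_CPUPID_SHIFT	8
#define LAST_CPUPID_MASK	((1UL << LAST_CPUPID_SHIFT) - 1)
#define LAST_CPUPID_PGSHIFT	16

struct fake_folio {
	unsigned long flags;
};

static void fake_folio_cpupid_reset_last(struct fake_folio *folio)
{
	/* Set all cpupid bits inside the flags word: "unset" sentinel. */
	folio->flags |= LAST_CPUPID_MASK << LAST_CPUPID_PGSHIFT;
}

static unsigned long fake_folio_cpupid_last(const struct fake_folio *folio)
{
	/* Extract the cpupid field back out of the flags word. */
	return (folio->flags >> LAST_CPUPID_PGSHIFT) & LAST_CPUPID_MASK;
}

int main(void)
{
	struct fake_folio folio = { .flags = 0 };

	fake_folio_cpupid_reset_last(&folio);
	assert(fake_folio_cpupid_last(&folio) == LAST_CPUPID_MASK);
	printf("cpupid after reset = 0x%lx\n", fake_folio_cpupid_last(&folio));
	return 0;
}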