From nobody Wed Dec 17 21:12:31 2025
From: Kefeng Wang <wangkefeng.wang@huawei.com>
To: Andrew Morton, "Matthew Wilcox (Oracle)"
Cc: SeongJae Park, Hugh Dickins, Kefeng Wang
Subject: [PATCH] mm: correct arg in reclaim_pages()/reclaim_clean_pages_from_list()
Date: Mon, 17 Apr 2023 19:48:07 +0800
Message-ID: <20230417114807.186786-1-wangkefeng.wang@huawei.com>
X-Mailer: git-send-email 2.35.3
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

When reclaim_pages() and reclaim_clean_pages_from_list() were converted
to use folios, their definitions renamed the argument from page_list to
folio_list, but the declarations were left unchanged. Correct the
declarations to match, and move the reclaim_pages() declaration from
swap.h to internal.h since it is only used within mm.

Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Reviewed-by: David Hildenbrand
Reviewed-by: SeongJae Park
---
 include/linux/swap.h | 1 -
 mm/internal.h        | 3 ++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 7f7d5b9ddf7e..3c69cb653cb9 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -442,7 +442,6 @@ extern unsigned long shrink_all_memory(unsigned long nr_pages);
 extern int vm_swappiness;
 long remove_mapping(struct address_space *mapping, struct folio *folio);
 
-extern unsigned long reclaim_pages(struct list_head *page_list);
 #ifdef CONFIG_NUMA
 extern int node_reclaim_mode;
 extern int sysctl_min_unmapped_ratio;
diff --git a/mm/internal.h b/mm/internal.h
index 286f97a5125c..f59093b8adb6 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -804,8 +804,9 @@ extern unsigned long __must_check vm_mmap_pgoff(struct file *, unsigned long,
 		unsigned long, unsigned long);
 
 extern void set_pageblock_order(void);
+unsigned long reclaim_pages(struct list_head *folio_list);
 unsigned int reclaim_clean_pages_from_list(struct zone *zone,
-					    struct list_head *page_list);
+					    struct list_head *folio_list);
 /* The ALLOC_WMARK bits are used as an index to zone->watermark */
 #define ALLOC_WMARK_MIN		WMARK_MIN
 #define ALLOC_WMARK_LOW		WMARK_LOW
-- 
2.35.3
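
A note on why the stale names went unnoticed: in C, the parameter names in a
declaration are not checked against the definition, so the old page_list names
kept compiling after the folio conversion and only misled readers. Below is a
minimal, standalone userspace sketch of that point; reclaim_count and its
parameters are hypothetical names for illustration only, not kernel APIs.

/*
 * Illustration only (not kernel code): a declaration whose parameter name
 * has gone stale still compiles, because C ignores parameter names in
 * prototypes. This mirrors the page_list -> folio_list mismatch the patch
 * above corrects.
 */
#include <stdio.h>

/* Stale declaration: parameter still named "page_list". */
unsigned long reclaim_count(unsigned long *page_list);

/* Definition already renamed its parameter to "folio_list". */
unsigned long reclaim_count(unsigned long *folio_list)
{
	return folio_list[0];	/* placeholder work */
}

int main(void)
{
	unsigned long folios[] = { 3, 7, 11 };

	printf("%lu\n", reclaim_count(folios));
	return 0;
}

This compiles cleanly and prints 3 despite the mismatched names, which is
exactly why the patch brings the declared names back in line with the
definitions.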