From: "zhaoyang.huang" <zhaoyang.huang@unisoc.com>
To: Andrew Morton, Johannes Weiner, Suren Baghdasaryan, Zhaoyang Huang
Subject: [PATCH] mm: deduct the number of pages reclaimed by madvise from workingset
Date: Wed, 24 May 2023 17:12:54 +0800
Message-ID: <1684919574-28368-1-git-send-email-zhaoyang.huang@unisoc.com>

From: Zhaoyang Huang <zhaoyang.huang@unisoc.com>

The pages reclaimed by madvise_pageout are deactivated and dropped from
the LRU forcefully, which leaves the pages that refault afterwards with
a larger refault distance than they deserve. This can hurt the accuracy
of thrashing detection when madvise_pageout is used as a common way of
reclaiming memory, as Android does now.
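To make the effect concrete, below is a minimal user-space model of the
refault-distance bookkeeping in mm/workingset.c. The single
nonresident_age counter and the helper names are illustrative
assumptions rather than the kernel's exact implementation; the point is
only to show how forced reclaim ages the workingset and how subtracting
nr_reclaimed, as this patch does, compensates for it.

/*
 * Simplified model of workingset refault-distance accounting.
 * A single counter stands in for lruvec->nonresident_age; the real
 * kernel keeps this per-lruvec and packs it into shadow entries.
 */
#include <stdio.h>

static unsigned long nonresident_age;	/* models lruvec->nonresident_age */

/* On eviction, the shadow entry snapshots the current age. */
static unsigned long evict_page(void)
{
	return nonresident_age++;
}

/* On refault, the distance is how much the LRU aged since eviction. */
static unsigned long refault_distance(unsigned long shadow)
{
	return nonresident_age - shadow;
}

int main(void)
{
	/* Page A is evicted by normal LRU reclaim. */
	unsigned long shadow = evict_page();

	/*
	 * madvise(MADV_PAGEOUT) forcefully reclaims 1000 unrelated
	 * pages, each bumping the nonresident age exactly like a
	 * natural eviction would.
	 */
	for (int i = 0; i < 1000; i++)
		evict_page();

	printf("distance with madvise aging: %lu\n", refault_distance(shadow));

	/*
	 * The compensation this patch adds, i.e.
	 * workingset_age_nonresident(lruvec, -nr_reclaimed).
	 */
	nonresident_age -= 1000;

	printf("distance after compensation: %lu\n", refault_distance(shadow));
	return 0;
}

Running the model prints a distance of 1001 before the compensation and
1 after it, which is what page A's reuse pattern actually warrants.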
Signed-off-by: Zhaoyang Huang <zhaoyang.huang@unisoc.com>
---
 include/linux/swap.h | 2 +-
 mm/madvise.c         | 4 ++--
 mm/vmscan.c          | 8 +++++++-
 3 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 2787b84..0312142 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -428,7 +428,7 @@ extern unsigned long mem_cgroup_shrink_node(struct mem_cgroup *mem,
 extern int vm_swappiness;
 long remove_mapping(struct address_space *mapping, struct folio *folio);
 
-extern unsigned long reclaim_pages(struct list_head *page_list);
+extern unsigned long reclaim_pages(struct mm_struct *mm, struct list_head *page_list);
 #ifdef CONFIG_NUMA
 extern int node_reclaim_mode;
 extern int sysctl_min_unmapped_ratio;
diff --git a/mm/madvise.c b/mm/madvise.c
index b6ea204..61c8d7b 100644
--- a/mm/madvise.c
+++ b/mm/madvise.c
@@ -420,7 +420,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 huge_unlock:
 	spin_unlock(ptl);
 	if (pageout)
-		reclaim_pages(&page_list);
+		reclaim_pages(mm, &page_list);
 	return 0;
 }
 
@@ -516,7 +516,7 @@ static int madvise_cold_or_pageout_pte_range(pmd_t *pmd,
 	arch_leave_lazy_mmu_mode();
 	pte_unmap_unlock(orig_pte, ptl);
 	if (pageout)
-		reclaim_pages(&page_list);
+		reclaim_pages(mm, &page_list);
 	cond_resched();
 
 	return 0;
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 20facec..048c10b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2741,12 +2741,14 @@ static unsigned int reclaim_folio_list(struct list_head *folio_list,
 	return nr_reclaimed;
 }
 
-unsigned long reclaim_pages(struct list_head *folio_list)
+unsigned long reclaim_pages(struct mm_struct *mm, struct list_head *folio_list)
 {
 	int nid;
 	unsigned int nr_reclaimed = 0;
 	LIST_HEAD(node_folio_list);
 	unsigned int noreclaim_flag;
+	struct lruvec *lruvec;
+	struct mem_cgroup *memcg = get_mem_cgroup_from_mm(mm);
 
 	if (list_empty(folio_list))
 		return nr_reclaimed;
@@ -2764,10 +2766,14 @@ unsigned long reclaim_pages(struct list_head *folio_list)
 		}
 
 		nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
+		lruvec = &memcg->nodeinfo[nid]->lruvec;
+		workingset_age_nonresident(lruvec, -nr_reclaimed);
 		nid = folio_nid(lru_to_folio(folio_list));
 	} while (!list_empty(folio_list));
 
 	nr_reclaimed += reclaim_folio_list(&node_folio_list, NODE_DATA(nid));
+	lruvec = &memcg->nodeinfo[nid]->lruvec;
+	workingset_age_nonresident(lruvec, -nr_reclaimed);
 
 	memalloc_noreclaim_restore(noreclaim_flag);
 
-- 
1.9.1
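For completeness, a hedged user-space sketch of how the reclaim_pages()
path changed above is reached: madvise(MADV_PAGEOUT) walks the range
via madvise_cold_or_pageout_pte_range() and hands the isolated folios
to reclaim_pages(). The mapping size and fill pattern are arbitrary
choices for illustration; the MADV_PAGEOUT fallback value matches
include/uapi/asm-generic/mman-common.h on kernels v5.4 and later.

#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#ifndef MADV_PAGEOUT
#define MADV_PAGEOUT 21		/* from <asm-generic/mman-common.h> */
#endif

int main(void)
{
	size_t len = 64UL << 20;	/* 64 MiB, arbitrary */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	memset(buf, 0x5a, len);		/* fault the pages in */

	/*
	 * Force the range out through reclaim_pages(); with this patch
	 * the reclaimed count is deducted again, so the pages no longer
	 * age the workingset of the caller's memcg.
	 */
	if (madvise(buf, len, MADV_PAGEOUT))
		perror("madvise(MADV_PAGEOUT)");

	munmap(buf, len);
	return 0;
}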