From nobody Wed Sep 10 07:42:13 2025
Date: Thu, 9 Mar 2023 09:31:07 +0000
Message-ID: <20230309093109.3039327-2-yosryahmed@google.com>
In-Reply-To: <20230309093109.3039327-1-yosryahmed@google.com>
References: <20230309093109.3039327-1-yosryahmed@google.com>
Subject: [PATCH v2 1/3] mm: vmscan: move set_task_reclaim_state() after cgroup_reclaim()
From: Yosry Ahmed
To: Alexander Viro, "Darrick J. Wong", Christoph Lameter, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	"Matthew Wilcox (Oracle)", Miaohe Lin, David Hildenbrand, Johannes Weiner,
	Peter Xu, NeilBrown, Shakeel Butt, Michal Hocko, Yu Zhao
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-xfs@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed

set_task_reclaim_state() is currently defined in mm/vmscan.c above an
#ifdef CONFIG_MEMCG block where cgroup_reclaim() is defined. We are
about to add some more helpers that operate on reclaim_state, and will
need to use cgroup_reclaim().
Move set_task_reclaim_state() after the #ifdef CONFIG_MEMCG block
containing the definition of cgroup_reclaim() to keep helpers operating
on reclaim_state together.

Signed-off-by: Yosry Ahmed
---
 mm/vmscan.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9c1c5e8b24b8..fef7d1c0f82b 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -188,18 +188,6 @@ struct scan_control {
  */
 int vm_swappiness = 60;
 
-static void set_task_reclaim_state(struct task_struct *task,
-				   struct reclaim_state *rs)
-{
-	/* Check for an overwrite */
-	WARN_ON_ONCE(rs && task->reclaim_state);
-
-	/* Check for the nulling of an already-nulled member */
-	WARN_ON_ONCE(!rs && !task->reclaim_state);
-
-	task->reclaim_state = rs;
-}
-
 LIST_HEAD(shrinker_list);
 DECLARE_RWSEM(shrinker_rwsem);
 
@@ -511,6 +499,18 @@ static bool writeback_throttling_sane(struct scan_control *sc)
 }
 #endif
 
+static void set_task_reclaim_state(struct task_struct *task,
+				   struct reclaim_state *rs)
+{
+	/* Check for an overwrite */
+	WARN_ON_ONCE(rs && task->reclaim_state);
+
+	/* Check for the nulling of an already-nulled member */
+	WARN_ON_ONCE(!rs && !task->reclaim_state);
+
+	task->reclaim_state = rs;
+}
+
 static long xchg_nr_deferred(struct shrinker *shrinker,
 			     struct shrink_control *sc)
 {
-- 
2.40.0.rc0.216.gc4246ad0f0-goog

From nobody Wed Sep 10 07:42:13 2025
Date: Thu, 9 Mar 2023 09:31:08 +0000
Message-ID: <20230309093109.3039327-3-yosryahmed@google.com>
In-Reply-To: <20230309093109.3039327-1-yosryahmed@google.com>
References: <20230309093109.3039327-1-yosryahmed@google.com>
Subject: [PATCH v2 2/3] mm: vmscan: refactor updating reclaimed pages in reclaim_state
From: Yosry Ahmed
To: Alexander Viro, "Darrick J. Wong", Christoph Lameter, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	"Matthew Wilcox (Oracle)", Miaohe Lin, David Hildenbrand, Johannes Weiner,
	Peter Xu, NeilBrown, Shakeel Butt, Michal Hocko, Yu Zhao
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-xfs@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed

During reclaim, we keep track of pages reclaimed by means other than
LRU-based reclaim through scan_control->reclaim_state->reclaimed_slab,
a pointer to which is stashed in the current task_struct. Despite the
name, this counter covers more than just reclaimed slab pages: it also
counts clean file pages dropped through pruned inodes and freed xfs
buffer pages.

Rename reclaimed_slab to reclaimed, and add a helper function that
wraps updating it through current, so that future changes to this
logic are contained within mm/vmscan.c.
Signed-off-by: Yosry Ahmed
---
 fs/inode.c           |  3 +--
 fs/xfs/xfs_buf.c     |  3 +--
 include/linux/swap.h |  5 ++++-
 mm/slab.c            |  3 +--
 mm/slob.c            |  6 ++----
 mm/slub.c            |  5 ++---
 mm/vmscan.c          | 36 ++++++++++++++++++++++++++++++------
 7 files changed, 41 insertions(+), 20 deletions(-)

diff --git a/fs/inode.c b/fs/inode.c
index 4558dc2f1355..e60fcc41faf1 100644
--- a/fs/inode.c
+++ b/fs/inode.c
@@ -864,8 +864,7 @@ static enum lru_status inode_lru_isolate(struct list_head *item,
 			__count_vm_events(KSWAPD_INODESTEAL, reap);
 		else
 			__count_vm_events(PGINODESTEAL, reap);
-		if (current->reclaim_state)
-			current->reclaim_state->reclaimed_slab += reap;
+		mm_account_reclaimed_pages(reap);
 	}
 	iput(inode);
 	spin_lock(lru_lock);
diff --git a/fs/xfs/xfs_buf.c b/fs/xfs/xfs_buf.c
index 54c774af6e1c..060079f1e966 100644
--- a/fs/xfs/xfs_buf.c
+++ b/fs/xfs/xfs_buf.c
@@ -286,8 +286,7 @@ xfs_buf_free_pages(
 		if (bp->b_pages[i])
 			__free_page(bp->b_pages[i]);
 	}
-	if (current->reclaim_state)
-		current->reclaim_state->reclaimed_slab += bp->b_page_count;
+	mm_account_reclaimed_pages(bp->b_page_count);
 
 	if (bp->b_pages != bp->b_page_array)
 		kmem_free(bp->b_pages);
diff --git a/include/linux/swap.h b/include/linux/swap.h
index 209a425739a9..589ea2731931 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -153,13 +153,16 @@ union swap_header {
  * memory reclaim
  */
 struct reclaim_state {
-	unsigned long reclaimed_slab;
+	/* pages reclaimed outside of LRU-based reclaim */
+	unsigned long reclaimed;
 #ifdef CONFIG_LRU_GEN
 	/* per-thread mm walk data */
 	struct lru_gen_mm_walk *mm_walk;
 #endif
 };
 
+void mm_account_reclaimed_pages(unsigned long pages);
+
 #ifdef __KERNEL__
 
 struct address_space;
diff --git a/mm/slab.c b/mm/slab.c
index dabc2a671fc6..64bf1de817b2 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -1392,8 +1392,7 @@ static void kmem_freepages(struct kmem_cache *cachep, struct slab *slab)
 	smp_wmb();
 	__folio_clear_slab(folio);
 
-	if (current->reclaim_state)
-		current->reclaim_state->reclaimed_slab += 1 << order;
+	mm_account_reclaimed_pages(1 << order);
 	unaccount_slab(slab, order, cachep);
 	__free_pages(&folio->page, order);
 }
diff --git a/mm/slob.c b/mm/slob.c
index fe567fcfa3a3..79cc8680c973 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -61,7 +61,7 @@
 #include <linux/slab.h>
 
 #include <linux/mm.h>
-#include <linux/swap.h> /* struct reclaim_state */
+#include <linux/swap.h> /* mm_account_reclaimed_pages() */
 #include <linux/cache.h>
 #include <linux/init.h>
 #include <linux/export.h>
@@ -211,9 +211,7 @@ static void slob_free_pages(void *b, int order)
 {
 	struct page *sp = virt_to_page(b);
 
-	if (current->reclaim_state)
-		current->reclaim_state->reclaimed_slab += 1 << order;
-
+	mm_account_reclaimed_pages(1 << order);
 	mod_node_page_state(page_pgdat(sp), NR_SLAB_UNRECLAIMABLE_B,
 			    -(PAGE_SIZE << order));
 	__free_pages(sp, order);
diff --git a/mm/slub.c b/mm/slub.c
index 39327e98fce3..7aa30eef8235 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -11,7 +11,7 @@
  */
 
 #include <linux/mm.h>
-#include <linux/swap.h> /* struct reclaim_state */
+#include <linux/swap.h> /* mm_account_reclaimed_pages() */
 #include <linux/module.h>
 #include <linux/bit_spinlock.h>
 #include <linux/interrupt.h>
@@ -2063,8 +2063,7 @@ static void __free_slab(struct kmem_cache *s, struct slab *slab)
 	/* Make the mapping reset visible before clearing the flag */
 	smp_wmb();
 	__folio_clear_slab(folio);
-	if (current->reclaim_state)
-		current->reclaim_state->reclaimed_slab += pages;
+	mm_account_reclaimed_pages(pages);
 	unaccount_slab(slab, order, s);
 	__free_pages(&folio->page, order);
 }
diff --git a/mm/vmscan.c b/mm/vmscan.c
index fef7d1c0f82b..a3e38851b34a 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -511,6 +511,34 @@ static void set_task_reclaim_state(struct task_struct *task,
 	task->reclaim_state = rs;
 }
 
+/*
+ * mm_account_reclaimed_pages(): account reclaimed pages outside of LRU-based
+ * reclaim
+ * @pages: number of pages reclaimed
+ *
+ * If the current process is undergoing a reclaim operation, increment the
+ * number of reclaimed pages by @pages.
+ */
+void mm_account_reclaimed_pages(unsigned long pages)
+{
+	if (current->reclaim_state)
+		current->reclaim_state->reclaimed += pages;
+}
+EXPORT_SYMBOL(mm_account_reclaimed_pages);
+
+/*
+ * flush_reclaim_state(): add pages reclaimed outside of LRU-based reclaim to
+ * scan_control->nr_reclaimed.
+ */
+static void flush_reclaim_state(struct scan_control *sc,
+				struct reclaim_state *rs)
+{
+	if (rs) {
+		sc->nr_reclaimed += rs->reclaimed;
+		rs->reclaimed = 0;
+	}
+}
+
 static long xchg_nr_deferred(struct shrinker *shrinker,
 			     struct shrink_control *sc)
 {
@@ -5346,8 +5374,7 @@ static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)
 	vmpressure(sc->gfp_mask, memcg, false, sc->nr_scanned - scanned,
 		   sc->nr_reclaimed - reclaimed);
 
-	sc->nr_reclaimed += current->reclaim_state->reclaimed_slab;
-	current->reclaim_state->reclaimed_slab = 0;
+	flush_reclaim_state(sc, current->reclaim_state);
 
 	return success ? MEMCG_LRU_YOUNG : 0;
 }
@@ -6472,10 +6499,7 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 
 	shrink_node_memcgs(pgdat, sc);
 
-	if (reclaim_state) {
-		sc->nr_reclaimed += reclaim_state->reclaimed_slab;
-		reclaim_state->reclaimed_slab = 0;
-	}
+	flush_reclaim_state(sc, reclaim_state);
 
 	/* Record the subtree's reclaim efficiency */
 	if (!sc->proactive)
-- 
2.40.0.rc0.216.gc4246ad0f0-goog

From nobody Wed Sep 10 07:42:13 2025
Date: Thu, 9 Mar 2023 09:31:09 +0000
Message-ID: <20230309093109.3039327-4-yosryahmed@google.com>
In-Reply-To: <20230309093109.3039327-1-yosryahmed@google.com>
References: <20230309093109.3039327-1-yosryahmed@google.com>
Subject: [PATCH v2 3/3] mm: vmscan: ignore non-LRU-based reclaim in memcg reclaim
From: Yosry Ahmed
To: Alexander Viro, "Darrick J. Wong", Christoph Lameter, David Rientjes,
	Joonsoo Kim, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo <42.hyeyoo@gmail.com>,
	"Matthew Wilcox (Oracle)", Miaohe Lin, David Hildenbrand, Johannes Weiner,
	Peter Xu, NeilBrown, Shakeel Butt, Michal Hocko, Yu Zhao
Cc: linux-fsdevel@vger.kernel.org, linux-kernel@vger.kernel.org,
	linux-xfs@vger.kernel.org, linux-mm@kvack.org, Yosry Ahmed

We keep track of different types of reclaimed pages through
reclaim_state->reclaimed, and we add them to the reported number of
reclaimed pages. For non-memcg reclaim, this makes sense. For memcg
reclaim, we have no clue if those pages are charged to the memcg under
reclaim.

Slab pages are shared by different memcgs, so a freed slab page may have
only been partially charged to the memcg under reclaim. The same goes
for clean file pages from pruned inodes (on highmem systems) or xfs
buffer pages: there is currently no simple way to link them to the memcg
under reclaim.

Stop reporting those freed pages as reclaimed pages during memcg
reclaim. This should make the return value of writing to memory.reclaim
more accurate, and may help reduce unnecessary reclaim retries during
memcg charging.
Generally, this should make the return value of
try_to_free_mem_cgroup_pages() more accurate. In some limited cases
(e.g. freeing a slab page that was mostly charged to the memcg under
reclaim), the return value of try_to_free_mem_cgroup_pages() can be
underestimated, but this should be fine. The freed pages will be
uncharged anyway, and we can charge the memcg the next time around as
we usually do memcg reclaim in a retry loop.

Signed-off-by: Yosry Ahmed
---
 mm/vmscan.c | 30 +++++++++++++++++++++++++++++-
 1 file changed, 29 insertions(+), 1 deletion(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index a3e38851b34a..bf9d8e175e92 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -533,7 +533,35 @@ EXPORT_SYMBOL(mm_account_reclaimed_pages);
 static void flush_reclaim_state(struct scan_control *sc,
 				struct reclaim_state *rs)
 {
-	if (rs) {
+	/*
+	 * Currently, reclaim_state->reclaimed includes three types of pages
+	 * freed outside of vmscan:
+	 * (1) Slab pages.
+	 * (2) Clean file pages from pruned inodes.
+	 * (3) XFS freed buffer pages.
+	 *
+	 * For all of these cases, we have no way of finding out whether these
+	 * pages were related to the memcg under reclaim. For example, a freed
+	 * slab page could have had only a single object charged to the memcg
+	 * under reclaim. Also, populated inodes are not on shrinker LRUs
+	 * anymore except on highmem systems.
+	 *
+	 * Instead of over-reporting the reclaimed pages in a memcg reclaim,
+	 * only count such pages in system-wide reclaim. This prevents
+	 * unnecessary retries during memcg charging and false positives from
+	 * proactive reclaim (memory.reclaim).
+	 *
+	 * For uncommon cases where the freed pages were actually significantly
+	 * charged to the memcg under reclaim, and we end up under-reporting, it
+	 * should be fine. The freed pages will be uncharged anyway, even if
+	 * they are not reported properly, and we will be able to make forward
+	 * progress in charging (which is usually in a retry loop).
+	 *
+	 * We can go one step further, and report the uncharged objcg pages in
+	 * memcg reclaim, to make reporting more accurate and reduce
+	 * under-reporting, but it's probably not worth the complexity for now.
+	 */
+	if (rs && !cgroup_reclaim(sc)) {
 		sc->nr_reclaimed += rs->reclaimed;
 		rs->reclaimed = 0;
 	}
-- 
2.40.0.rc0.216.gc4246ad0f0-goog