From: Leno Hou <lenohou@gmail.com>
Date: Wed, 11 Mar 2026 20:09:42 +0800
Subject: [PATCH v2 1/2] mm/mglru: fix cgroup OOM during MGLRU state switching
Message-Id: <20260311-b4-switch-mglru-v2-v2-1-080cb9321463@gmail.com>
To: Andrew Morton, Axel Rasmussen, Yuanchu Xie, Wei Xu, Jialing Wang,
    Yafang Shao, Yu Zhao, Kairui Song, Bingfang Guo, Barry Song
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org

When the Multi-Gen LRU (MGLRU) state is toggled dynamically, a race
condition exists between the state switch and the memory reclaim path.
This can lead to unexpected cgroup OOM kills even when plenty of
reclaimable memory is available.
Problem Description
===================

The issue arises from a "reclaim vacuum" during the transition:

1. When disabling MGLRU, lru_gen_change_state() sets lrugen->enabled to
   false before the pages are drained from the MGLRU lists back to the
   traditional LRU lists.
2. Concurrent reclaimers in shrink_lruvec() see lrugen->enabled as false
   and skip the MGLRU path.
3. However, these pages may not have reached the traditional LRU lists
   yet, or the change may not yet be visible to all CPUs due to a lack
   of synchronization.
4. get_scan_count() subsequently finds the traditional LRU lists empty,
   concludes there is no reclaimable memory, and triggers an OOM kill.

A similar race can occur during enablement, where a reclaimer sees the
new state before the MGLRU lists have been populated via
fill_evictable().

Solution
========

Introduce a 'draining' state (lru_drain_core) to bridge the transition.
While the system is in this intermediate state, reclaimers are forced
to attempt both the MGLRU and the traditional reclaim paths
sequentially. This ensures that folios remain visible to at least one
reclaim mechanism until the transition has fully materialized across
all CPUs.

Changes
=======

- Add a static branch, lru_drain_core, to track the transition state.
- Update shrink_lruvec(), shrink_node(), and kswapd_age_node() to allow
  a "joint reclaim" period during the transition.
- Ensure all LRU helpers correctly identify page state by checking
  folio_lru_gen(folio) != -1 instead of relying solely on global flags.

This eliminates the race window that previously triggered OOMs under
high memory pressure. The issue was consistently reproduced on v6.1.157
and v6.18.3 in a high-pressure memory cgroup (v1) environment.
To: Andrew Morton
To: Axel Rasmussen
To: Yuanchu Xie
To: Wei Xu
To: Barry Song <21cnbao@gmail.com>
To: Jialing Wang
To: Yafang Shao
To: Yu Zhao
To: Kairui Song
To: Bingfang Guo
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Leno Hou
---
 include/linux/mm_inline.h |  5 +++++
 mm/rmap.c                 |  2 +-
 mm/swap.c                 | 14 ++++++++------
 mm/vmscan.c               | 49 +++++++++++++++++++++++++++++++++++---------
 4 files changed, 54 insertions(+), 16 deletions(-)

diff --git a/include/linux/mm_inline.h b/include/linux/mm_inline.h
index fa2d6ba811b5..e6443e22bf67 100644
--- a/include/linux/mm_inline.h
+++ b/include/linux/mm_inline.h
@@ -321,6 +321,11 @@ static inline bool lru_gen_in_fault(void)
 	return false;
 }
 
+static inline int folio_lru_gen(const struct folio *folio)
+{
+	return -1;
+}
+
 static inline bool lru_gen_add_folio(struct lruvec *lruvec, struct folio *folio, bool reclaiming)
 {
 	return false;
diff --git a/mm/rmap.c b/mm/rmap.c
index 0f00570d1b9e..488bcdca65ed 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -958,7 +958,7 @@ static bool folio_referenced_one(struct folio *folio,
 		return false;
 	}
 
-	if (lru_gen_enabled() && pvmw.pte) {
+	if ((folio_lru_gen(folio) != -1) && pvmw.pte) {
 		if (lru_gen_look_around(&pvmw))
 			referenced++;
 	} else if (pvmw.pte) {
diff --git a/mm/swap.c b/mm/swap.c
index bb19ccbece46..a2397b44710a 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -456,7 +456,7 @@ void folio_mark_accessed(struct folio *folio)
 {
 	if (folio_test_dropbehind(folio))
 		return;
-	if (lru_gen_enabled()) {
+	if (folio_lru_gen(folio) != -1) {
 		lru_gen_inc_refs(folio);
 		return;
 	}
@@ -553,7 +553,7 @@ void folio_add_lru_vma(struct folio *folio, struct vm_area_struct *vma)
  */
 static void lru_deactivate_file(struct lruvec *lruvec, struct folio *folio)
 {
-	bool active = folio_test_active(folio) || lru_gen_enabled();
+	bool active = folio_test_active(folio) || (folio_lru_gen(folio) != -1);
 	long nr_pages = folio_nr_pages(folio);
 
 	if (folio_test_unevictable(folio))
@@ -596,7 +596,9 @@ static void lru_deactivate(struct lruvec *lruvec, struct folio *folio)
 {
 	long nr_pages = folio_nr_pages(folio);
 
-	if (folio_test_unevictable(folio) || !(folio_test_active(folio) || lru_gen_enabled()))
+	if (folio_test_unevictable(folio) ||
+	    !(folio_test_active(folio) ||
+	      (folio_lru_gen(folio) != -1)))
 		return;
 
 	lruvec_del_folio(lruvec, folio);
@@ -618,7 +620,7 @@ static void lru_lazyfree(struct lruvec *lruvec, struct folio *folio)
 
 	lruvec_del_folio(lruvec, folio);
 	folio_clear_active(folio);
-	if (lru_gen_enabled())
+	if (folio_lru_gen(folio) != -1)
 		lru_gen_clear_refs(folio);
 	else
 		folio_clear_referenced(folio);
@@ -689,7 +691,7 @@ void deactivate_file_folio(struct folio *folio)
 	if (folio_test_unevictable(folio) || !folio_test_lru(folio))
 		return;
 
-	if (lru_gen_enabled() && lru_gen_clear_refs(folio))
+	if ((folio_lru_gen(folio) != -1) && lru_gen_clear_refs(folio))
 		return;
 
 	folio_batch_add_and_move(folio, lru_deactivate_file);
@@ -708,7 +710,7 @@ void folio_deactivate(struct folio *folio)
 	if (folio_test_unevictable(folio) || !folio_test_lru(folio))
 		return;
 
-	if (lru_gen_enabled() ? lru_gen_clear_refs(folio) : !folio_test_active(folio))
+	if ((folio_lru_gen(folio) != -1) ? lru_gen_clear_refs(folio) : !folio_test_active(folio))
 		return;
 
 	folio_batch_add_and_move(folio, lru_deactivate);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 0fc9373e8251..38d38edda471 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -873,11 +873,23 @@ static bool lru_gen_set_refs(struct folio *folio)
 	set_mask_bits(&folio->flags.f, LRU_REFS_FLAGS, BIT(PG_workingset));
 	return true;
 }
+
+DEFINE_STATIC_KEY_FALSE(lru_drain_core);
+static inline bool lru_gen_draining(void)
+{
+	return static_branch_unlikely(&lru_drain_core);
+}
+
 #else
 static bool lru_gen_set_refs(struct folio *folio)
 {
 	return false;
 }
+static inline bool lru_gen_draining(void)
+{
+	return false;
+}
+
 #endif /* CONFIG_LRU_GEN */
 
 static enum folio_references folio_check_references(struct folio *folio,
@@ -905,7 +917,7 @@ static enum folio_references folio_check_references(struct folio *folio,
 	if (referenced_ptes == -1)
 		return FOLIOREF_KEEP;
 
-	if (lru_gen_enabled()) {
+	if (folio_lru_gen(folio) != -1) {
 		if (!referenced_ptes)
 			return FOLIOREF_RECLAIM;
 
@@ -2319,7 +2331,7 @@ static void prepare_scan_control(pg_data_t *pgdat, struct scan_control *sc)
 	unsigned long file;
 	struct lruvec *target_lruvec;
 
-	if (lru_gen_enabled())
+	if (lru_gen_enabled() && !lru_gen_draining())
 		return;
 
 	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
@@ -5178,6 +5190,8 @@ static void lru_gen_change_state(bool enabled)
 	if (enabled == lru_gen_enabled())
 		goto unlock;
 
+	static_branch_enable_cpuslocked(&lru_drain_core);
+
 	if (enabled)
 		static_branch_enable_cpuslocked(&lru_gen_caps[LRU_GEN_CORE]);
 	else
@@ -5208,6 +5222,9 @@ static void lru_gen_change_state(bool enabled)
 
 		cond_resched();
 	} while ((memcg = mem_cgroup_iter(NULL, memcg, NULL)));
+
+	static_branch_disable_cpuslocked(&lru_drain_core);
+
 unlock:
 	mutex_unlock(&state_mutex);
 	put_online_mems();
@@ -5780,9 +5797,12 @@ static void shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 	bool proportional_reclaim;
 	struct blk_plug plug;
 
-	if (lru_gen_enabled() && !root_reclaim(sc)) {
+	if ((lru_gen_enabled() || lru_gen_draining()) && !root_reclaim(sc)) {
 		lru_gen_shrink_lruvec(lruvec, sc);
-		return;
+
+		if (!lru_gen_draining())
+			return;
+	}
 
 	get_scan_count(lruvec, sc, nr);
@@ -6041,11 +6061,17 @@ static void shrink_node(pg_data_t *pgdat, struct scan_control *sc)
 	unsigned long nr_reclaimed, nr_scanned, nr_node_reclaimed;
 	struct lruvec *target_lruvec;
 	bool reclaimable = false;
+	s8 priority = sc->priority;
 
-	if (lru_gen_enabled() && root_reclaim(sc)) {
+	if ((lru_gen_enabled() || lru_gen_draining()) && root_reclaim(sc)) {
 		memset(&sc->nr, 0, sizeof(sc->nr));
 		lru_gen_shrink_node(pgdat, sc);
-		return;
+
+		if (!lru_gen_draining())
+			return;
+
+		sc->priority = priority;
+	}
 
 	target_lruvec = mem_cgroup_lruvec(sc->target_mem_cgroup, pgdat);
@@ -6315,7 +6341,7 @@ static void snapshot_refaults(struct mem_cgroup *target_memcg, pg_data_t *pgdat)
 	struct lruvec *target_lruvec;
 	unsigned long refaults;
 
-	if (lru_gen_enabled())
+	if (lru_gen_enabled() && !lru_gen_draining())
 		return;
 
 	target_lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
@@ -6703,10 +6729,15 @@ static void kswapd_age_node(struct pglist_data *pgdat, struct scan_control *sc)
 {
 	struct mem_cgroup *memcg;
 	struct lruvec *lruvec;
+	s8 priority = sc->priority;
 
-	if (lru_gen_enabled()) {
+	if (lru_gen_enabled() || lru_gen_draining()) {
 		lru_gen_age_node(pgdat, sc);
-		return;
+
+		if (!lru_gen_draining())
+			return;
+
+		sc->priority = priority;
+	}
 
 	lruvec = mem_cgroup_lruvec(NULL, pgdat);
-- 
2.52.0
From: Leno Hou <lenohou@gmail.com>
Date: Wed, 11 Mar 2026 20:09:43 +0800
Subject: [PATCH v2 2/2] mm/mglru: maintain workingset refault context across state transitions
Message-Id: <20260311-b4-switch-mglru-v2-v2-2-080cb9321463@gmail.com>
To: Andrew Morton, Axel Rasmussen, Yuanchu Xie, Wei Xu, Jialing Wang,
    Yafang Shao, Yu Zhao, Kairui Song, Bingfang Guo, Barry Song
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org

When the MGLRU state is toggled dynamically, existing shadow entries
(eviction tokens) lose their context. The traditional LRU and MGLRU
handle workingset refaults with different logic. Without that context,
shadow entries re-activated by the "wrong" reclaim logic trigger
excessive page activations (pgactivate) and system thrashing, because
the kernel cannot tell whether a refaulted page was originally managed
by MGLRU or by the traditional LRU.

This patch introduces shadow entry context tracking:

- Encode MGLRU origin: introduce WORKINGSET_MGLRU_SHIFT into the shadow
  entry (eviction token) encoding. This adds an 'is_mglru' bit to
  shadow entries, allowing the kernel to identify the originating
  reclaim logic for a page even after the global MGLRU state has been
  toggled.

- Refault logic dispatch: use this 'is_mglru' bit in
  workingset_refault() and workingset_test_recent() to dispatch refault
  events to the correct handler (lru_gen_refault() vs. the traditional
  workingset refault).

This ensures that refaulted pages are handled by the appropriate
reclaim logic regardless of the current MGLRU enabled state, preventing
unnecessary thrashing and state-inconsistent refault activations during
state transitions.

To: Andrew Morton
To: Axel Rasmussen
To: Yuanchu Xie
To: Wei Xu
To: Barry Song <21cnbao@gmail.com>
To: Jialing Wang
To: Yafang Shao
To: Yu Zhao
To: Kairui Song
To: Bingfang Guo
Cc: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Leno Hou
---
 mm/workingset.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/mm/workingset.c b/mm/workingset.c
index 13422d304715..baa766daac24 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -180,8 +180,10 @@
  * refault distance will immediately activate the refaulting page.
  */
 
+#define WORKINGSET_MGLRU_SHIFT	1
 #define WORKINGSET_SHIFT	1
 #define EVICTION_SHIFT	((BITS_PER_LONG - BITS_PER_XA_VALUE) +	\
+			 WORKINGSET_MGLRU_SHIFT +	\
 			 WORKINGSET_SHIFT + NODES_SHIFT + \
 			 MEM_CGROUP_ID_SHIFT)
 #define EVICTION_MASK	(~0UL >> EVICTION_SHIFT)
@@ -197,12 +199,13 @@ static unsigned int bucket_order __read_mostly;
 
 static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction,
-			 bool workingset)
+			 bool workingset, bool is_mglru)
 {
 	eviction &= EVICTION_MASK;
 	eviction = (eviction << MEM_CGROUP_ID_SHIFT) | memcgid;
 	eviction = (eviction << NODES_SHIFT) | pgdat->node_id;
 	eviction = (eviction << WORKINGSET_SHIFT) | workingset;
+	eviction = (eviction << WORKINGSET_MGLRU_SHIFT) | is_mglru;
 
 	return xa_mk_value(eviction);
 }
@@ -214,6 +217,7 @@ static void unpack_shadow(void *shadow, int *memcgidp, pg_data_t **pgdat,
 	int memcgid, nid;
 	bool workingset;
 
+	entry >>= WORKINGSET_MGLRU_SHIFT;
 	workingset = entry & ((1UL << WORKINGSET_SHIFT) - 1);
 	entry >>= WORKINGSET_SHIFT;
 	nid = entry & ((1UL << NODES_SHIFT) - 1);
@@ -254,7 +258,7 @@ static void *lru_gen_eviction(struct folio *folio)
 	hist = lru_hist_from_seq(min_seq);
 	atomic_long_add(delta, &lrugen->evicted[hist][type][tier]);
 
-	return pack_shadow(mem_cgroup_private_id(memcg), pgdat, token, workingset);
+	return pack_shadow(mem_cgroup_private_id(memcg), pgdat, token, workingset, true);
 }
 
 /*
@@ -390,7 +394,7 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
 	VM_BUG_ON_FOLIO(folio_ref_count(folio), folio);
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 
-	if (lru_gen_enabled())
+	if (folio_lru_gen(folio) != -1)
 		return lru_gen_eviction(folio);
 
 	lruvec = mem_cgroup_lruvec(target_memcg, pgdat);
@@ -400,7 +404,7 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
 	eviction >>= bucket_order;
 	workingset_age_nonresident(lruvec, folio_nr_pages(folio));
 	return pack_shadow(memcgid, pgdat, eviction,
-			   folio_test_workingset(folio));
+			   folio_test_workingset(folio), false);
 }
 
 /**
@@ -426,8 +430,10 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 	int memcgid;
 	struct pglist_data *pgdat;
 	unsigned long eviction;
+	unsigned long entry = xa_to_value(shadow);
+	bool is_mglru = !!(entry & WORKINGSET_MGLRU_SHIFT);
 
-	if (lru_gen_enabled()) {
+	if (is_mglru) {
 		bool recent;
 
 		rcu_read_lock();
@@ -539,10 +545,11 @@ void workingset_refault(struct folio *folio, void *shadow)
 	struct lruvec *lruvec;
 	bool workingset;
 	long nr;
+	unsigned long entry = xa_to_value(shadow);
 
 	VM_BUG_ON_FOLIO(!folio_test_locked(folio), folio);
 
-	if (lru_gen_enabled()) {
+	if (entry & ((1UL << WORKINGSET_MGLRU_SHIFT) - 1)) {
 		lru_gen_refault(folio, shadow);
 		return;
 	}
-- 
2.52.0