From nobody Mon Apr 6 15:27:42 2026
From: Kairui Song via B4 Relay
Date: Fri, 03 Apr 2026 02:53:30 +0800
Subject: [PATCH v3 04/14] mm/mglru: restructure the reclaim loop
Message-Id: <20260403-mglru-reclaim-v3-4-a285efd6ff91@tencent.com>
References: <20260403-mglru-reclaim-v3-0-a285efd6ff91@tencent.com>
In-Reply-To: <20260403-mglru-reclaim-v3-0-a285efd6ff91@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Axel Rasmussen, Yuanchu Xie, Wei Xu, Johannes Weiner,
 David Hildenbrand, Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes,
 Barry Song, David Stevens, Chen Ridong, Leno Hou, Yafang Shao, Yu Zhao,
 Zicheng Wang, Kalesh Singh, Suren Baghdasaryan, Chris Li, Vernon Yang,
 linux-kernel@vger.kernel.org, Baolin Wang, Kairui Song
Reply-To: kasong@tencent.com
X-Mailer: b4 0.15.1

From: Kairui Song

The current reclaim loop recalculates the scan target on every
iteration. The number of folios to scan is derived from the LRU
length, with some unclear behaviors: e.g., the scan target is shifted
by the reclaim priority only when aging is not needed or when running
at the default priority, and the calculation is coupled with aging and
rotation.

Simplify this and decouple aging from rotation: calculate the scan
target once at the beginning of reclaim, always respect the reclaim
priority, and make aging and rotation explicit.

This slightly changes how aging and offline memcg reclaim work:

Previously, aging was always skipped at DEF_PRIORITY, even when
eviction was impossible. Now, aging is always triggered when it is
necessary to make progress. The old behavior could waste a reclaim
iteration only to escalate the priority, potentially over-reclaiming
slab and breaking reclaim balance in multi-cgroup setups.

Similarly for offline memcgs: previously, an offline memcg wouldn't be
aged unless it had no evictable folios at all. Now, we might age it if
it has only 3 generations and the reclaim priority is below
DEF_PRIORITY, which should be fine. On one hand, an offline memcg
might still hold long-lived folios; in fact, a long-existing offline
memcg must be pinned by some long-lived folios like shmem. These
folios might be in use by other memcgs, so aging them like an ordinary
memcg seems correct. Besides, aging enables further reclaim of an
offline memcg, which will certainly happen if we keep shrinking it.
And offline memcgs may soon no longer be an issue once reparenting
lands.

Overall, the memcg LRU rotation, as described in mmzone.h, remains the
same.

Reviewed-by: Axel Rasmussen
Signed-off-by: Kairui Song
---
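To make the new scan-budget arithmetic concrete, below the fold is a
minimal standalone sketch (illustrative only, not part of the applied
patch). It assumes the kernel's usual DEF_PRIORITY of 12 and
SWAP_CLUSTER_MAX of 32, omits proportional memcg protection, and the
helper name scan_budget() is a hypothetical stand-in for the shift-
plus-floor step in the new get_nr_to_scan():

#include <stdio.h>

/* Kernel defaults, copied here so the model is self-contained */
#define DEF_PRIORITY		12
#define SWAP_CLUSTER_MAX	32UL

/*
 * Model of the new budget calculation: shift the evictable size by
 * the reclaim priority, but fall back to a small fixed target so
 * reclaim can still make forward progress on small lruvecs.
 */
static unsigned long scan_budget(unsigned long evictable, int priority)
{
	unsigned long nr_to_scan = evictable >> priority;

	if (!nr_to_scan)
		nr_to_scan = evictable < SWAP_CLUSTER_MAX ?
			     evictable : SWAP_CLUSTER_MAX;
	return nr_to_scan;
}

int main(void)
{
	/* 1M evictable folios at DEF_PRIORITY: 1048576 >> 12 == 256 */
	printf("%lu\n", scan_budget(1UL << 20, DEF_PRIORITY));
	/* 16 folios would shift down to 0; the floor keeps it at 16 */
	printf("%lu\n", scan_budget(16, DEF_PRIORITY));
	return 0;
}

With the floor in place, even a heavily shifted budget targets at
least a swap cluster's worth of folios, which is what lets the loop
always respect the priority without the old DEF_PRIORITY special
casing.
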
 mm/vmscan.c | 74 ++++++++++++++++++++++++++++++++++++++++----------------------------------
 1 file changed, 40 insertions(+), 34 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 963362523782..93ffb3d98fed 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4913,49 +4913,44 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
 }
 
 static bool should_run_aging(struct lruvec *lruvec, unsigned long max_seq,
-			     int swappiness, unsigned long *nr_to_scan)
+			     struct scan_control *sc, int swappiness)
 {
 	DEFINE_MIN_SEQ(lruvec);
 
-	*nr_to_scan = 0;
 	/* have to run aging, since eviction is not possible anymore */
 	if (evictable_min_seq(min_seq, swappiness) + MIN_NR_GENS > max_seq)
 		return true;
 
-	*nr_to_scan = lruvec_evictable_size(lruvec, swappiness);
+	/* try to get away with not aging at the default priority */
+	if (sc->priority == DEF_PRIORITY)
+		return false;
+
 	/* better to run aging even though eviction is still possible */
 	return evictable_min_seq(min_seq, swappiness) + MIN_NR_GENS == max_seq;
 }
 
-/*
- * For future optimizations:
- * 1. Defer try_to_inc_max_seq() to workqueues to reduce latency for memcg
- *    reclaim.
- */
-static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc, int swappiness)
+static long get_nr_to_scan(struct lruvec *lruvec, struct scan_control *sc,
+			   struct mem_cgroup *memcg, int swappiness)
 {
-	bool need_aging;
-	unsigned long nr_to_scan;
-	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
-	DEFINE_MAX_SEQ(lruvec);
-
-	if (mem_cgroup_below_min(sc->target_mem_cgroup, memcg))
-		return -1;
-
-	need_aging = should_run_aging(lruvec, max_seq, swappiness, &nr_to_scan);
+	unsigned long evictable, nr_to_scan;
 
+	evictable = lruvec_evictable_size(lruvec, swappiness);
+	nr_to_scan = evictable;
 	/* try to scrape all its memory if this memcg was deleted */
-	if (nr_to_scan && !mem_cgroup_online(memcg))
+	if (!mem_cgroup_online(memcg))
 		return nr_to_scan;
 
 	nr_to_scan = apply_proportional_protection(memcg, sc, nr_to_scan);
 
-	/* try to get away with not aging at the default priority */
-	if (!need_aging || sc->priority == DEF_PRIORITY)
-		return nr_to_scan >> sc->priority;
+	/*
+	 * Always respect scan priority, minimally target some folios
+	 * to keep reclaim moving forwards.
+	 */
+	nr_to_scan >>= sc->priority;
+	if (!nr_to_scan)
+		nr_to_scan = min(evictable, SWAP_CLUSTER_MAX);
 
-	/* stop scanning this lruvec as it's low on cold folios */
-	return try_to_inc_max_seq(lruvec, max_seq, swappiness, false) ? -1 : 0;
+	return nr_to_scan;
 }
 
 static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
@@ -4985,31 +4980,43 @@ static bool should_abort_scan(struct lruvec *lruvec, struct scan_control *sc)
 	return true;
 }
 
+/*
+ * For future optimizations:
+ * 1. Defer try_to_inc_max_seq() to workqueues to reduce latency for memcg
+ *    reclaim.
+ */
 static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 {
+	bool need_rotate = false;
 	long nr_batch, nr_to_scan;
-	unsigned long scanned = 0;
 	int swappiness = get_swappiness(lruvec, sc);
+	struct mem_cgroup *memcg = lruvec_memcg(lruvec);
 
-	while (true) {
+	nr_to_scan = get_nr_to_scan(lruvec, sc, memcg, swappiness);
+	while (nr_to_scan > 0) {
 		int delta;
+		DEFINE_MAX_SEQ(lruvec);
 
-		nr_to_scan = get_nr_to_scan(lruvec, sc, swappiness);
-		if (nr_to_scan <= 0)
+		if (mem_cgroup_below_min(sc->target_mem_cgroup, memcg)) {
+			need_rotate = true;
 			break;
+		}
+
+		if (should_run_aging(lruvec, max_seq, sc, swappiness)) {
+			if (try_to_inc_max_seq(lruvec, max_seq, swappiness, false))
+				need_rotate = true;
+			break;
+		}
 
 		nr_batch = min(nr_to_scan, MAX_LRU_BATCH);
 		delta = evict_folios(nr_batch, lruvec, sc, swappiness);
 		if (!delta)
 			break;
 
-		scanned += delta;
-		if (scanned >= nr_to_scan)
-			break;
-
 		if (should_abort_scan(lruvec, sc))
 			break;
 
+		nr_to_scan -= delta;
 		cond_resched();
 	}
 
@@ -5035,8 +5042,7 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
 		reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
 	}
 
-	/* whether this lruvec should be rotated */
-	return nr_to_scan < 0;
+	return need_rotate;
 }
 
 static int shrink_one(struct lruvec *lruvec, struct scan_control *sc)

-- 
2.53.0