From nobody Wed Apr 8 06:42:02 2026
From: Kairui Song via B4 Relay
Date: Tue, 07 Apr 2026 19:57:43 +0800
Subject: [PATCH v4 14/14] mm/vmscan: unify writeback reclaim statistic and
 throttling
Precedence: bulk
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20260407-mglru-reclaim-v4-14-98cf3dc69519@tencent.com>
References: <20260407-mglru-reclaim-v4-0-98cf3dc69519@tencent.com>
In-Reply-To: <20260407-mglru-reclaim-v4-0-98cf3dc69519@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Axel Rasmussen, Yuanchu Xie, Wei Xu, Johannes Weiner,
 David Hildenbrand, Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes,
 Barry Song, David Stevens, Chen Ridong, Leno Hou, Yafang Shao, Yu Zhao,
 Zicheng Wang, Kalesh Singh, Suren Baghdasaryan, Chris Li, Vernon Yang,
 linux-kernel@vger.kernel.org, Baolin Wang, Kairui Song
X-Mailer: b4 0.15.1
Reply-To: kasong@tencent.com
From: Kairui Song <kasong@tencent.com>

Currently MGLRU and the classic LRU handle reclaim statistics and writeback
very differently, especially throttling: MGLRU simply ignored the
throttling part. Unify this logic with a shared helper so both setups
share the same behavior.

Test with the following bash reproducer:

  echo "Setup a slow device using dm-delay"
  dd if=/dev/zero of=/var/tmp/backing bs=1M count=2048
  LOOP=$(losetup --show -f /var/tmp/backing)
  mkfs.ext4 -q $LOOP
  echo "0 $(blockdev --getsz $LOOP) delay $LOOP 0 0 $LOOP 0 1000" | \
    dmsetup create slow_dev
  mkdir -p /mnt/slow && mount /dev/mapper/slow_dev /mnt/slow

  echo "Start writeback pressure"
  sync && echo 3 > /proc/sys/vm/drop_caches
  mkdir /sys/fs/cgroup/test_wb
  echo 128M > /sys/fs/cgroup/test_wb/memory.max
  (echo $BASHPID > /sys/fs/cgroup/test_wb/cgroup.procs && \
    dd if=/dev/zero of=/mnt/slow/testfile bs=1M count=192)

  echo "Clean up"
  echo "0 $(blockdev --getsz $LOOP) error" | dmsetup load slow_dev
  dmsetup resume slow_dev
  umount -l /mnt/slow && sync
  dmsetup remove slow_dev

Before this commit, dd gets OOM killed almost immediately when MGLRU is
enabled, while the classic LRU is fine. After this commit, throttling is
effective: reclaim no longer spins on the LRU and there is no premature
OOM. Stress tests on other workloads also look good.

Global throttling is not handled here yet; that will be fixed
separately.
Suggested-by: Chen Ridong
Tested-by: Leno Hou
Reviewed-by: Axel Rasmussen
Reviewed-by: Baolin Wang
Signed-off-by: Kairui Song
---
 mm/vmscan.c | 90 ++++++++++++++++++++++++++++++++++++++---------------------
 1 file changed, 41 insertions(+), 49 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index cea6df51dcff..609cbaf2cd4e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1942,6 +1942,44 @@ static int current_may_throttle(void)
 	return !(current->flags & PF_LOCAL_THROTTLE);
 }
 
+static void handle_reclaim_writeback(unsigned long nr_taken,
+				     struct pglist_data *pgdat,
+				     struct scan_control *sc,
+				     struct reclaim_stat *stat)
+{
+	/*
+	 * If dirty folios are scanned that are not queued for IO, it
+	 * implies that flushers are not doing their job. This can
+	 * happen when memory pressure pushes dirty folios to the end of
+	 * the LRU before the dirty limits are breached and the dirty
+	 * data has expired. It can also happen when the proportion of
+	 * dirty folios grows not through writes but through memory
+	 * pressure reclaiming all the clean cache. And in some cases,
+	 * the flushers simply cannot keep up with the allocation
+	 * rate. Nudge the flusher threads in case they are asleep.
+	 */
+	if (stat->nr_unqueued_dirty == nr_taken && nr_taken) {
+		wakeup_flusher_threads(WB_REASON_VMSCAN);
+		/*
+		 * For cgroupv1 dirty throttling is achieved by waking up
+		 * the kernel flusher here and later waiting on folios
+		 * which are in writeback to finish (see shrink_folio_list()).
+		 *
+		 * Flusher may not be able to issue writeback quickly
+		 * enough for cgroupv1 writeback throttling to work
+		 * on a large system.
+		 */
+		if (!writeback_throttling_sane(sc))
+			reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
+	}
+
+	sc->nr.dirty += stat->nr_dirty;
+	sc->nr.congested += stat->nr_congested;
+	sc->nr.writeback += stat->nr_writeback;
+	sc->nr.immediate += stat->nr_immediate;
+	sc->nr.taken += nr_taken;
+}
+
 /*
  * shrink_inactive_list() is a helper for shrink_node().
  * It returns the number of reclaimed pages
@@ -2005,39 +2043,7 @@ static unsigned long shrink_inactive_list(unsigned long nr_to_scan,
 	lruvec_lock_irq(lruvec);
 	lru_note_cost_unlock_irq(lruvec, file, stat.nr_pageout,
 				 nr_scanned - nr_reclaimed);
-
-	/*
-	 * If dirty folios are scanned that are not queued for IO, it
-	 * implies that flushers are not doing their job. This can
-	 * happen when memory pressure pushes dirty folios to the end of
-	 * the LRU before the dirty limits are breached and the dirty
-	 * data has expired. It can also happen when the proportion of
-	 * dirty folios grows not through writes but through memory
-	 * pressure reclaiming all the clean cache. And in some cases,
-	 * the flushers simply cannot keep up with the allocation
-	 * rate. Nudge the flusher threads in case they are asleep.
-	 */
-	if (stat.nr_unqueued_dirty == nr_taken) {
-		wakeup_flusher_threads(WB_REASON_VMSCAN);
-		/*
-		 * For cgroupv1 dirty throttling is achieved by waking up
-		 * the kernel flusher here and later waiting on folios
-		 * which are in writeback to finish (see shrink_folio_list()).
-		 *
-		 * Flusher may not be able to issue writeback quickly
-		 * enough for cgroupv1 writeback throttling to work
-		 * on a large system.
-		 */
-		if (!writeback_throttling_sane(sc))
-			reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
-	}
-
-	sc->nr.dirty += stat.nr_dirty;
-	sc->nr.congested += stat.nr_congested;
-	sc->nr.writeback += stat.nr_writeback;
-	sc->nr.immediate += stat.nr_immediate;
-	sc->nr.taken += nr_taken;
-
+	handle_reclaim_writeback(nr_taken, pgdat, sc, &stat);
 	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
 			nr_scanned, nr_reclaimed, &stat, sc->priority, file);
 	return nr_reclaimed;
@@ -4824,26 +4830,11 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
 retry:
 	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false, memcg);
 	sc->nr_reclaimed += reclaimed;
+	handle_reclaim_writeback(isolated, pgdat, sc, &stat);
 	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id,
 			type_scanned, reclaimed, &stat, sc->priority,
 			type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
 
-	/*
-	 * If too many file cache in the coldest generation can't be evicted
-	 * due to being dirty, wake up the flusher.
-	 */
-	if (stat.nr_unqueued_dirty == isolated) {
-		wakeup_flusher_threads(WB_REASON_VMSCAN);
-
-		/*
-		 * For cgroupv1 dirty throttling is achieved by waking up
-		 * the kernel flusher here and later waiting on folios
-		 * which are in writeback to finish (see shrink_folio_list()).
-		 */
-		if (!writeback_throttling_sane(sc))
-			reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
-	}
-
 	list_for_each_entry_safe_reverse(folio, next, &list, lru) {
 		DEFINE_MIN_SEQ(lruvec);
 
@@ -4886,6 +4877,7 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
 
 	if (!list_empty(&list)) {
 		skip_retry = true;
+		isolated = 0;
 		goto retry;
 	}
 
-- 
2.53.0