From nobody Thu Apr  2 14:13:09 2026
From: Kairui Song via B4 Relay
Date: Sun, 29 Mar 2026 03:52:34 +0800
Subject: [PATCH v2 08/12] mm/mglru: simplify and improve dirty writeback handling
Message-Id: <20260329-mglru-reclaim-v2-8-b53a3678513c@tencent.com>
References: <20260329-mglru-reclaim-v2-0-b53a3678513c@tencent.com>
In-Reply-To: <20260329-mglru-reclaim-v2-0-b53a3678513c@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Axel Rasmussen, Yuanchu Xie, Wei Xu, Johannes Weiner,
 David Hildenbrand, Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes,
 Barry Song, David Stevens, Chen Ridong, Leno Hou, Yafang Shao, Yu Zhao,
 Zicheng Wang, Kalesh Singh, Suren Baghdasaryan, Chris Li, Vernon Yang,
 linux-kernel@vger.kernel.org, Baolin Wang, Kairui Song
Reply-To: kasong@tencent.com
X-Mailer: b4 0.15.0

From: Kairui Song

The current handling of dirty writeback folios is not working well for file page
heavy workloads: dirty folios are protected and moved to the next generation
upon isolation instead of being throttled, or reactivated upon pageout
(shrink_folio_list). This might reduce LRU lock contention slightly, but as a
result folios ping-pong badly between the head and tail of the last two
generations, since the shrinker runs into protected dirty writeback folios far
more often than it would with reactivation.

The dirty-flush wakeup condition is also much more passive than on the
active/inactive LRU. The active/inactive LRU wakes the flusher if one batch of
folios passed to shrink_folio_list is unevictable due to being under writeback,
but MGLRU has to check this only after the whole reclaim loop is done, by
comparing the number of isolation-protected folios against the total number
reclaimed. We previously saw OOM problems with this as well; they were fixed,
but the fix is still not perfect [1].

So drop the special handling for dirty writeback folios entirely and simply
reactivate them, as the active/inactive LRU does, and move the dirty-flush
wakeup check to right after shrink_folio_list. This should improve both
throttling and performance.

A test with YCSB workloadb showed a major performance improvement:

Before this series:
  Throughput(ops/sec):  61642.78008938203
  AverageLatency(us):   507.11127774145166
  pgpgin:               158190589
  pgpgout:              5880616
  workingset_refault:   7262988

After this commit:
  Throughput(ops/sec):  80216.04855744806  (+30.1%, higher is better)
  AverageLatency(us):   388.17633477268913 (-23.5%, lower is better)
  pgpgin:               101871227          (-35.6%, lower is better)
  pgpgout:              5770028
  workingset_refault:   3418186            (-52.9%, lower is better)

The refault rate is ~50% lower and the throughput ~30% higher, which is a huge
gain. We also observed significant performance gains for other real-world
workloads.
We were concerned that the dirty flush could cause more wear on SSDs. That
should not be a problem here: the wakeup condition only triggers once dirty
folios have been pushed to the tail of the LRU, which indicates that memory
pressure is already so high that writeback is blocking the workload anyway.

Reviewed-by: Axel Rasmussen
Link: https://lore.kernel.org/linux-mm/20241026115714.1437435-1-jingxiangzeng.cas@gmail.com/ [1]
Signed-off-by: Kairui Song
---
 mm/vmscan.c | 57 ++++++++++++++++----------------------------------------
 1 file changed, 16 insertions(+), 41 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 8de5c8d5849e..17b5318fad39 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4583,7 +4583,6 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 		       int tier_idx)
 {
	bool success;
-	bool dirty, writeback;
	int gen = folio_lru_gen(folio);
	int type = folio_is_file_lru(folio);
	int zone = folio_zonenum(folio);
@@ -4633,21 +4632,6 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
		return true;
	}
 
-	dirty = folio_test_dirty(folio);
-	writeback = folio_test_writeback(folio);
-	if (type == LRU_GEN_FILE && dirty) {
-		sc->nr.file_taken += delta;
-		if (!writeback)
-			sc->nr.unqueued_dirty += delta;
-	}
-
-	/* waiting for writeback */
-	if (writeback || (type == LRU_GEN_FILE && dirty)) {
-		gen = folio_inc_gen(lruvec, folio, true);
-		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
-		return true;
-	}
-
	return false;
 }
 
@@ -4754,8 +4738,6 @@ static int scan_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
	trace_mm_vmscan_lru_isolate(sc->reclaim_idx, sc->order, nr_to_scan,
				    scanned, skipped, isolated, type ?
				    LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
-	if (type == LRU_GEN_FILE)
-		sc->nr.file_taken += isolated;
 
	*isolatedp = isolated;
	return scanned;
@@ -4858,12 +4840,27 @@ static int evict_folios(unsigned long nr_to_scan, struct lruvec *lruvec,
		return scanned;
 retry:
	reclaimed = shrink_folio_list(&list, pgdat, sc, &stat, false, memcg);
-	sc->nr.unqueued_dirty += stat.nr_unqueued_dirty;
	sc->nr_reclaimed += reclaimed;
	trace_mm_vmscan_lru_shrink_inactive(pgdat->node_id, type_scanned,
			reclaimed, &stat, sc->priority,
			type ? LRU_INACTIVE_FILE : LRU_INACTIVE_ANON);
 
+	/*
+	 * If too many file cache in the coldest generation can't be evicted
+	 * due to being dirty, wake up the flusher.
+	 */
+	if (stat.nr_unqueued_dirty == isolated) {
+		wakeup_flusher_threads(WB_REASON_VMSCAN);
+
+		/*
+		 * For cgroupv1 dirty throttling is achieved by waking up
+		 * the kernel flusher here and later waiting on folios
+		 * which are in writeback to finish (see shrink_folio_list()).
+		 */
+		if (!writeback_throttling_sane(sc))
+			reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
+	}
+
	list_for_each_entry_safe_reverse(folio, next, &list, lru) {
		DEFINE_MIN_SEQ(lruvec);
 
@@ -5020,28 +5017,6 @@ static bool try_to_shrink_lruvec(struct lruvec *lruvec, struct scan_control *sc)
		cond_resched();
	}
 
-	/*
-	 * If too many file cache in the coldest generation can't be evicted
-	 * due to being dirty, wake up the flusher.
-	 */
-	if (sc->nr.unqueued_dirty && sc->nr.unqueued_dirty == sc->nr.file_taken) {
-		struct pglist_data *pgdat = lruvec_pgdat(lruvec);
-
-		wakeup_flusher_threads(WB_REASON_VMSCAN);
-
-		/*
-		 * For cgroupv1 dirty throttling is achieved by waking up
-		 * the kernel flusher here and later waiting on folios
-		 * which are in writeback to finish (see shrink_folio_list()).
-		 *
-		 * Flusher may not be able to issue writeback quickly
-		 * enough for cgroupv1 writeback throttling to work
-		 * on a large system.
-		 */
-		if (!writeback_throttling_sane(sc))
-			reclaim_throttle(pgdat, VMSCAN_THROTTLE_WRITEBACK);
-	}
-
	return need_rotate;
 }
 
-- 
2.53.0