From nobody Mon Apr 6 15:27:42 2026
From: Kairui Song via B4 Relay
Date: Fri, 03 Apr 2026 02:53:35 +0800
Subject: [PATCH v3 09/14] mm/mglru: use the common routine for dirty/writeback reactivation
Message-Id: <20260403-mglru-reclaim-v3-9-a285efd6ff91@tencent.com>
References: <20260403-mglru-reclaim-v3-0-a285efd6ff91@tencent.com>
In-Reply-To: <20260403-mglru-reclaim-v3-0-a285efd6ff91@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Axel Rasmussen, Yuanchu Xie, Wei Xu, Johannes Weiner, David Hildenbrand, Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Barry Song, David Stevens, Chen Ridong, Leno Hou, Yafang Shao, Yu Zhao, Zicheng Wang, Kalesh Singh, Suren Baghdasaryan, Chris Li, Vernon Yang, linux-kernel@vger.kernel.org, Baolin Wang, Kairui Song
Reply-To: kasong@tencent.com
From: Kairui Song

Currently, MGLRU moves dirty and under-writeback folios to the second-oldest
gen instead of reactivating them like the classical LRU. This might help
reduce LRU contention since it skips the isolation, but as a result these
folios show up at the LRU tail more frequently, leading to inefficient
reclaim. Besides, the dirty / writeback check done after isolation in
shrink_folio_list() is more accurate and covers more cases.

So instead, drop the special handling for dirty and writeback folios, use
the common routine, and reactivate them like the classical LRU. This
should in theory improve scan efficiency. These folios are rotated back to
the LRU tail once writeback is done, so there is no risk of hotness
inversion, and each reclaim loop now has a higher success rate.

This also prepares for unifying the writeback and throttling mechanism
with the classical LRU: keeping these folios far from the tail means that
detecting the tail batch will follow a similar pattern to the classical
LRU.

The micro-optimization that avoids LRU contention by skipping the
isolation is gone, which should be fine: compared to the IO and writeback
cost, the isolation overhead is trivial.
Reviewed-by: Axel Rasmussen
Signed-off-by: Kairui Song
---
 mm/vmscan.c | 19 -------------------
 1 file changed, 19 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9f4512a4d35f..2a36cf937061 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4578,7 +4578,6 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 		       int tier_idx)
 {
 	bool success;
-	bool dirty, writeback;
 	int gen = folio_lru_gen(folio);
 	int type = folio_is_file_lru(folio);
 	int zone = folio_zonenum(folio);
@@ -4628,21 +4627,6 @@ static bool sort_folio(struct lruvec *lruvec, struct folio *folio, struct scan_c
 		return true;
 	}
 
-	dirty = folio_test_dirty(folio);
-	writeback = folio_test_writeback(folio);
-	if (type == LRU_GEN_FILE && dirty) {
-		sc->nr.file_taken += delta;
-		if (!writeback)
-			sc->nr.unqueued_dirty += delta;
-	}
-
-	/* waiting for writeback */
-	if (writeback || (type == LRU_GEN_FILE && dirty)) {
-		gen = folio_inc_gen(lruvec, folio, true);
-		list_move(&folio->lru, &lrugen->folios[gen][type][zone]);
-		return true;
-	}
-
 	return false;
 }
 
@@ -4664,9 +4648,6 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
 	if (!folio_test_referenced(folio))
 		set_mask_bits(&folio->flags.f, LRU_REFS_MASK, 0);
 
-	/* for shrink_folio_list() */
-	folio_clear_reclaim(folio);
-
 	success = lru_gen_del_folio(lruvec, folio, true);
 	VM_WARN_ON_ONCE_FOLIO(!success, folio);
 
-- 
2.53.0