From nobody Mon Apr 6 15:27:48 2026
From: Kairui Song <kasong@tencent.com>
Date: Fri, 03 Apr 2026 02:53:34 +0800
Subject: [PATCH v3 08/14] mm/mglru: remove redundant swap constrained check upon isolation
Message-Id: <20260403-mglru-reclaim-v3-8-a285efd6ff91@tencent.com>
References: <20260403-mglru-reclaim-v3-0-a285efd6ff91@tencent.com>
In-Reply-To: <20260403-mglru-reclaim-v3-0-a285efd6ff91@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Axel Rasmussen, Yuanchu Xie, Wei Xu, Johannes Weiner, David Hildenbrand, Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Barry Song, David Stevens, Chen Ridong, Leno Hou, Yafang Shao, Yu Zhao, Zicheng Wang, Kalesh Singh, Suren Baghdasaryan, Chris Li, Vernon Yang, linux-kernel@vger.kernel.org, Baolin Wang, Kairui Song
Reply-To: kasong@tencent.com

Remove the swap-constrained early reject check upon isolation.
This check is a micro-optimization that rejects folios early when swap I/O is not allowed. But it is redundant and overly broad: shrink_folio_list() already handles all of these cases at the proper granularity. Notably, the check wrongly rejected lazyfree folios, and it does not cover every rejection case. shrink_folio_list() uses may_enter_fs(), which distinguishes non-SWP_FS_OPS swap devices from filesystem-backed swap, and it performs all of its checks after the folio is locked, so flags such as the swap-cache flag are stable.

The check also covers dirty file folios. That is not a problem today, since sort_folio() already bumps dirty file folios to the next generation, but it gets in the way of unifying dirty-folio writeback handling.

There should be no performance impact from removing the check: a micro-optimization is lost, but lazyfree reclaim is unblocked for NOIO contexts, which are not a common case in the first place.

Signed-off-by: Kairui Song
---
 mm/vmscan.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index b3371877762a..9f4512a4d35f 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4650,12 +4650,6 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
 {
 	bool success;
 
-	/* swap constrained */
-	if (!(sc->gfp_mask & __GFP_IO) &&
-	    (folio_test_dirty(folio) ||
-	     (folio_test_anon(folio) && !folio_test_swapcache(folio))))
-		return false;
-
 	/* raced with release_pages() */
 	if (!folio_try_get(folio))
 		return false;
-- 
2.53.0