From: Kairui Song via B4 Relay
Date: Tue, 07 Apr 2026 19:57:37 +0800
Subject: [PATCH v4 08/14] mm/mglru: remove redundant swap constrained check upon isolation
Message-Id: <20260407-mglru-reclaim-v4-8-98cf3dc69519@tencent.com>
References: <20260407-mglru-reclaim-v4-0-98cf3dc69519@tencent.com>
In-Reply-To: <20260407-mglru-reclaim-v4-0-98cf3dc69519@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Axel Rasmussen, Yuanchu Xie, Wei Xu, Johannes Weiner, David Hildenbrand, Michal Hocko, Qi Zheng, Shakeel Butt, Lorenzo Stoakes, Barry Song, David Stevens, Chen Ridong, Leno Hou, Yafang Shao, Yu Zhao, Zicheng Wang, Kalesh Singh, Suren Baghdasaryan, Chris Li, Vernon Yang, linux-kernel@vger.kernel.org, Qi Zheng, Baolin Wang, Kairui Song
Reply-To: kasong@tencent.com
From: Kairui Song

Remove the swap-constrained early reject check upon isolation.
This check is a micro-optimization: when swap IO is not allowed, folios are rejected early. But it is redundant and overly broad, since shrink_folio_list() already handles all these cases with proper granularity. Notably, this check wrongly rejected lazyfree folios, and it does not cover all rejection cases anyway. shrink_folio_list() uses may_enter_fs(), which distinguishes non-SWP_FS_OPS devices from filesystem-backed swap, and it performs all the checks after the folio is locked, so flags like the swap cache flag are stable.

This check also covers dirty file folios. That is not a problem now, since sort_folio() already bumps dirty file folios to the next generation, but it causes trouble for unifying dirty folio writeback handling.

There should be no performance impact from removing it. We may lose a micro-optimization, but this unblocks lazyfree reclaim for NOIO contexts, which is not a common case in the first place.

Reviewed-by: Axel Rasmussen
Signed-off-by: Kairui Song
---
 mm/vmscan.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 354c6fef3c42..b60cb579c15c 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -4650,12 +4650,6 @@ static bool isolate_folio(struct lruvec *lruvec, struct folio *folio, struct sca
 {
 	bool success;
 
-	/* swap constrained */
-	if (!(sc->gfp_mask & __GFP_IO) &&
-	    (folio_test_dirty(folio) ||
-	     (folio_test_anon(folio) && !folio_test_swapcache(folio))))
-		return false;
-
 	/* raced with release_pages() */
 	if (!folio_try_get(folio))
 		return false;
-- 
2.53.0