From: Kairui Song via B4 Relay
Date: Wed, 18 Feb 2026 04:06:37 +0800
Subject: [PATCH v3 12/12] mm, swap: no need to clear the shadow explicitly
Message-Id: <20260218-swap-table-p3-v3-12-f4e34be021a7@tencent.com>
References: <20260218-swap-table-p3-v3-0-f4e34be021a7@tencent.com>
In-Reply-To: <20260218-swap-table-p3-v3-0-f4e34be021a7@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
 Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, Youngjun Park,
 linux-kernel@vger.kernel.org, Chris Li, Kairui Song
Reply-To: kasong@tencent.com

From: Kairui Song

Since we no longer bypass the swap cache, every swap-in will clear the
swap shadow by inserting the folio into the swap table. The only place
we may seem to need to free the swap shadow is when the swap slots are
freed directly without a folio (swap_put_entries_direct).
But with the swap table, that is not needed either. Freeing a slot in
the swap table will set the table entry to NULL, which erases the
shadow just fine. So just delete all explicit shadow clearing; it's no
longer needed. Also, rearrange the freeing.

Signed-off-by: Kairui Song
Acked-by: Chris Li
---
 mm/swap.h       |  1 -
 mm/swap_state.c | 21 ---------------------
 mm/swapfile.c   |  2 --
 3 files changed, 24 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index 9728e6a944b2..a77016f2423b 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -290,7 +290,6 @@ void __swap_cache_del_folio(struct swap_cluster_info *ci,
 		struct folio *folio, swp_entry_t entry, void *shadow);
 void __swap_cache_replace_folio(struct swap_cluster_info *ci,
 				struct folio *old, struct folio *new);
-void __swap_cache_clear_shadow(swp_entry_t entry, int nr_ents);
 
 void show_swap_cache_info(void);
 void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index e7618ffe6d70..32d9d877bda8 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -350,27 +350,6 @@ void __swap_cache_replace_folio(struct swap_cluster_info *ci,
 	}
 }
 
-/**
- * __swap_cache_clear_shadow - Clears a set of shadows in the swap cache.
- * @entry: The starting index entry.
- * @nr_ents: How many slots need to be cleared.
- *
- * Context: Caller must ensure the range is valid, all in one single cluster,
- * not occupied by any folio, and lock the cluster.
- */
-void __swap_cache_clear_shadow(swp_entry_t entry, int nr_ents)
-{
-	struct swap_cluster_info *ci = __swap_entry_to_cluster(entry);
-	unsigned int ci_off = swp_cluster_offset(entry), ci_end;
-	unsigned long old;
-
-	ci_end = ci_off + nr_ents;
-	do {
-		old = __swap_table_xchg(ci, ci_off, null_to_swp_tb());
-		WARN_ON_ONCE(swp_tb_is_folio(old) || swp_tb_get_count(old));
-	} while (++ci_off < ci_end);
-}
-
 /*
  * If we are the only user, then try to free up the swap cache.
  *
diff --git a/mm/swapfile.c b/mm/swapfile.c
index dab5e726855b..802efa37b33f 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1287,7 +1287,6 @@ static void swap_range_alloc(struct swap_info_struct *si,
 static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 			    unsigned int nr_entries)
 {
-	unsigned long begin = offset;
 	unsigned long end = offset + nr_entries - 1;
 	void (*swap_slot_free_notify)(struct block_device *, unsigned long);
 	unsigned int i;
@@ -1312,7 +1311,6 @@ static void swap_range_free(struct swap_info_struct *si, unsigned long offset,
 		swap_slot_free_notify(si->bdev, offset);
 		offset++;
 	}
-	__swap_cache_clear_shadow(swp_entry(si->type, begin), nr_entries);
 
 	/*
	 * Make sure that try_to_unuse() observes si->inuse_pages reaching 0
-- 
2.52.0