From: Kairui Song <kasong@tencent.com>
Date: Fri, 05 Dec 2025 03:29:20 +0800
Subject: [PATCH v4 12/19] mm, swap: use swap cache as the swap-in synchronization layer
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Message-Id: <20251205-swap-table-p2-v4-12-cb7e28a26a40@tencent.com>
References: <20251205-swap-table-p2-v4-0-cb7e28a26a40@tencent.com>
In-Reply-To: <20251205-swap-table-p2-v4-0-cb7e28a26a40@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Baoquan He, Barry Song, Chris Li, Nhat Pham, Yosry Ahmed,
    David Hildenbrand, Johannes Weiner, Youngjun Park, Hugh Dickins,
    Baolin Wang, Ying Huang, Kemeng Shi, Lorenzo Stoakes,
    "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song <kasong@tencent.com>

Swap-in synchronization currently relies mostly on the swap_map's
SWAP_HAS_CACHE bit: whoever sets the bit first does the actual work to
swap in a folio. This has been causing many issues, as it is really just
a poor implementation of a bit lock. Raced users have no idea what is
pinning a slot, so they have to loop with
schedule_timeout_uninterruptible(1), which is ugly and causes long-tail
latency and other performance issues. The abuse of SWAP_HAS_CACHE has
also been causing many other troubles for synchronization and
maintenance. This is the first step towards removing the bit completely.

We have just removed all swap-in paths that bypass the swap cache, and
both the swap cache and the swap map are now protected by the cluster
lock. So swap-in synchronization can be resolved directly in the swap
cache layer using the cluster lock: whoever inserts a folio into the
swap cache first does the swap-in work, and because folios are locked
during swap operations, other raced users simply wait on the folio lock.

SWAP_HAS_CACHE will be removed in a later commit. For now, we still set
it for some remaining users, but the bit setting and the swap cache
folio insertion are now done in the same critical section, after the
swap cache is ready. No one has to spin on the SWAP_HAS_CACHE bit
anymore.
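The resulting pattern looks roughly like the sketch below. This is only an
illustration of the idea, not the code added by this patch: the helper name
swapin_add_or_wait() is hypothetical, and memcg charging, shadow/workingset
and readahead handling are omitted.

static struct folio *swapin_add_or_wait(swp_entry_t entry,
					struct folio *new_folio)
{
	struct folio *existing;
	int err;

	__folio_set_locked(new_folio);
	__folio_set_swapbacked(new_folio);

	/*
	 * Inserting into the swap cache is the synchronization point:
	 * it is done under the cluster lock and fails with -EEXIST if
	 * another task already owns the swap-in of this entry.
	 */
	err = swap_cache_add_folio(new_folio, entry, NULL, false);
	if (!err)
		return new_folio;	/* winner: read data into the locked folio */

	folio_unlock(new_folio);
	if (err != -EEXIST)
		return NULL;		/* entry is gone, let the caller retry */

	/*
	 * Loser: instead of spinning on SWAP_HAS_CACHE, look up the
	 * winner's folio and sleep on its folio lock until the swap-in
	 * completes.
	 */
	existing = swap_cache_get_folio(entry);
	if (existing)
		folio_lock(existing);
	return existing;
}

In the actual patch, __swap_cache_prepare_and_add() returns the raced folio
to its caller instead of locking it on the spot, but the waiting mechanism
is the same folio lock.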
This both simplifies the logic and should improve the performance,
eliminating issues like the one solved in commit 01626a1823024 ("mm: avoid
unconditional one-tick sleep when swapcache_prepare fails"), or the
"skip_if_exists" from commit a65b0e7607ccb ("zswap: make shrinking
memcg-aware"), which will be removed very soon.

Signed-off-by: Kairui Song <kasong@tencent.com>
---
 include/linux/swap.h |   6 ---
 mm/swap.h            |  15 +++++++-
 mm/swap_state.c      | 105 ++++++++++++++++++++++++++++-----------------------
 mm/swapfile.c        |  39 ++++++++++++-------
 mm/vmscan.c          |   1 -
 5 files changed, 96 insertions(+), 70 deletions(-)

diff --git a/include/linux/swap.h b/include/linux/swap.h
index 936fa8f9e5f3..69025b473472 100644
--- a/include/linux/swap.h
+++ b/include/linux/swap.h
@@ -458,7 +458,6 @@ void put_swap_folio(struct folio *folio, swp_entry_t entry);
 extern swp_entry_t get_swap_page_of_type(int);
 extern int add_swap_count_continuation(swp_entry_t, gfp_t);
 extern int swap_duplicate_nr(swp_entry_t entry, int nr);
-extern int swapcache_prepare(swp_entry_t entry, int nr);
 extern void swap_free_nr(swp_entry_t entry, int nr_pages);
 extern void free_swap_and_cache_nr(swp_entry_t entry, int nr);
 int swap_type_of(dev_t device, sector_t offset);
@@ -518,11 +517,6 @@ static inline int swap_duplicate_nr(swp_entry_t swp, int nr_pages)
 	return 0;
 }
 
-static inline int swapcache_prepare(swp_entry_t swp, int nr)
-{
-	return 0;
-}
-
 static inline void swap_free_nr(swp_entry_t entry, int nr_pages)
 {
 }
diff --git a/mm/swap.h b/mm/swap.h
index e0f05babe13a..b5075a1aee04 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -234,6 +234,14 @@ static inline bool folio_matches_swap_entry(const struct folio *folio,
 	return folio_entry.val == round_down(entry.val, nr_pages);
 }
 
+/* Temporary internal helpers */
+void __swapcache_set_cached(struct swap_info_struct *si,
+			    struct swap_cluster_info *ci,
+			    swp_entry_t entry);
+void __swapcache_clear_cached(struct swap_info_struct *si,
+			      struct swap_cluster_info *ci,
+			      swp_entry_t entry, unsigned int nr);
+
 /*
  * All swap cache helpers below require the caller to ensure the swap entries
  * used are valid and stablize the device by any of the following ways:
@@ -247,7 +255,8 @@ static inline bool folio_matches_swap_entry(const struct folio *folio,
  */
 struct folio *swap_cache_get_folio(swp_entry_t entry);
 void *swap_cache_get_shadow(swp_entry_t entry);
-void swap_cache_add_folio(struct folio *folio, swp_entry_t entry, void **shadow);
+int swap_cache_add_folio(struct folio *folio, swp_entry_t entry,
+			 void **shadow, bool alloc);
 void swap_cache_del_folio(struct folio *folio);
 struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_flags,
 		struct mempolicy *mpol, pgoff_t ilx,
@@ -413,8 +422,10 @@ static inline void *swap_cache_get_shadow(swp_entry_t entry)
 	return NULL;
 }
 
-static inline void swap_cache_add_folio(struct folio *folio, swp_entry_t entry, void **shadow)
+static inline int swap_cache_add_folio(struct folio *folio, swp_entry_t entry,
+				       void **shadow, bool alloc)
 {
+	return -ENOENT;
 }
 
 static inline void swap_cache_del_folio(struct folio *folio)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index 0c5aad537716..df7df8b75e52 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -128,34 +128,64 @@ void *swap_cache_get_shadow(swp_entry_t entry)
  * @entry: The swap entry corresponding to the folio.
  * @gfp: gfp_mask for XArray node allocation.
  * @shadowp: If a shadow is found, return the shadow.
+ * @alloc: If it's the allocator that is trying to insert a folio. Allocator
+ *         sets SWAP_HAS_CACHE to pin slots before insert so skip map update.
  *
  * Context: Caller must ensure @entry is valid and protect the swap device
  * with reference count or locks.
- * The caller also needs to update the corresponding swap_map slots with
- * SWAP_HAS_CACHE bit to avoid race or conflict.
  */
-void swap_cache_add_folio(struct folio *folio, swp_entry_t entry, void **shadowp)
+int swap_cache_add_folio(struct folio *folio, swp_entry_t entry,
+			 void **shadowp, bool alloc)
 {
+	int err;
 	void *shadow = NULL;
+	struct swap_info_struct *si;
 	unsigned long old_tb, new_tb;
 	struct swap_cluster_info *ci;
-	unsigned int ci_start, ci_off, ci_end;
+	unsigned int ci_start, ci_off, ci_end, offset;
 	unsigned long nr_pages = folio_nr_pages(folio);
 
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
 	VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio);
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_swapbacked(folio), folio);
 
+	si = __swap_entry_to_info(entry);
 	new_tb = folio_to_swp_tb(folio);
 	ci_start = swp_cluster_offset(entry);
 	ci_end = ci_start + nr_pages;
 	ci_off = ci_start;
-	ci = swap_cluster_lock(__swap_entry_to_info(entry), swp_offset(entry));
+	offset = swp_offset(entry);
+	ci = swap_cluster_lock(si, swp_offset(entry));
+	if (unlikely(!ci->table)) {
+		err = -ENOENT;
+		goto failed;
+	}
 	do {
-		old_tb = __swap_table_xchg(ci, ci_off, new_tb);
-		WARN_ON_ONCE(swp_tb_is_folio(old_tb));
+		old_tb = __swap_table_get(ci, ci_off);
+		if (unlikely(swp_tb_is_folio(old_tb))) {
+			err = -EEXIST;
+			goto failed;
+		}
+		if (!alloc && unlikely(!__swap_count(swp_entry(swp_type(entry), offset)))) {
+			err = -ENOENT;
+			goto failed;
+		}
 		if (swp_tb_is_shadow(old_tb))
 			shadow = swp_tb_to_shadow(old_tb);
+		offset++;
+	} while (++ci_off < ci_end);
+
+	ci_off = ci_start;
+	offset = swp_offset(entry);
+	do {
+		/*
+		 * Still need to pin the slots with SWAP_HAS_CACHE since
+		 * swap allocator depends on that.
+		 */
+		if (!alloc)
+			__swapcache_set_cached(si, ci, swp_entry(swp_type(entry), offset));
+		__swap_table_set(ci, ci_off, new_tb);
+		offset++;
 	} while (++ci_off < ci_end);
 
 	folio_ref_add(folio, nr_pages);
@@ -168,6 +198,11 @@ void swap_cache_add_folio(struct folio *folio, swp_entry_t entry, void **shadowp
 
 	if (shadowp)
 		*shadowp = shadow;
+	return 0;
+
+failed:
+	swap_cluster_unlock(ci);
+	return err;
 }
 
 /**
@@ -186,6 +221,7 @@ void swap_cache_add_folio(struct folio *folio, swp_entry_t entry, void **shadowp
 void __swap_cache_del_folio(struct swap_cluster_info *ci, struct folio *folio,
 			    swp_entry_t entry, void *shadow)
 {
+	struct swap_info_struct *si;
 	unsigned long old_tb, new_tb;
 	unsigned int ci_start, ci_off, ci_end;
 	unsigned long nr_pages = folio_nr_pages(folio);
@@ -195,6 +231,7 @@ void __swap_cache_del_folio(struct swap_cluster_info *ci, struct folio *folio,
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_swapcache(folio), folio);
 	VM_WARN_ON_ONCE_FOLIO(folio_test_writeback(folio), folio);
 
+	si = __swap_entry_to_info(entry);
 	new_tb = shadow_swp_to_tb(shadow);
 	ci_start = swp_cluster_offset(entry);
 	ci_end = ci_start + nr_pages;
@@ -210,6 +247,7 @@ void __swap_cache_del_folio(struct swap_cluster_info *ci, struct folio *folio,
 	folio_clear_swapcache(folio);
 	node_stat_mod_folio(folio, NR_FILE_PAGES, -nr_pages);
 	lruvec_stat_mod_folio(folio, NR_SWAPCACHE, -nr_pages);
+	__swapcache_clear_cached(si, ci, entry, nr_pages);
 }
 
 /**
@@ -231,7 +269,6 @@ void swap_cache_del_folio(struct folio *folio)
 	__swap_cache_del_folio(ci, folio, entry, NULL);
 	swap_cluster_unlock(ci);
 
-	put_swap_folio(folio, entry);
 	folio_ref_sub(folio, folio_nr_pages(folio));
 }
 
@@ -423,67 +460,37 @@ static struct folio *__swap_cache_prepare_and_add(swp_entry_t entry,
 		gfp_t gfp, bool charged, bool skip_if_exists)
 {
-	struct folio *swapcache;
+	struct folio *swapcache = NULL;
 	void *shadow;
 	int ret;
 
-	/*
-	 * Check and pin the swap map with SWAP_HAS_CACHE, then add the folio
-	 * into the swap cache. Loop with a schedule delay if raced with
-	 * another process setting SWAP_HAS_CACHE. This hackish loop will
-	 * be fixed very soon.
-	 */
+	__folio_set_locked(folio);
+	__folio_set_swapbacked(folio);
 	for (;;) {
-		ret = swapcache_prepare(entry, folio_nr_pages(folio));
+		ret = swap_cache_add_folio(folio, entry, &shadow, false);
 		if (!ret)
 			break;
 
 		/*
-		 * The skip_if_exists is for protecting against a recursive
-		 * call to this helper on the same entry waiting forever
-		 * here because SWAP_HAS_CACHE is set but the folio is not
-		 * in the swap cache yet. This can happen today if
-		 * mem_cgroup_swapin_charge_folio() below triggers reclaim
-		 * through zswap, which may call this helper again in the
-		 * writeback path.
-		 *
-		 * Large order allocation also needs special handling on
+		 * Large order allocation needs special handling on
 		 * race: if a smaller folio exists in cache, swapin needs
 		 * to fallback to order 0, and doing a swap cache lookup
 		 * might return a folio that is irrelevant to the faulting
 		 * entry because @entry is aligned down. Just return NULL.
 		 */
 		if (ret != -EEXIST || skip_if_exists || folio_test_large(folio))
-			return NULL;
+			goto failed;
 
-		/*
-		 * Check the swap cache again, we can only arrive
-		 * here because swapcache_prepare returns -EEXIST.
-		 */
 		swapcache = swap_cache_get_folio(entry);
 		if (swapcache)
-			return swapcache;
-
-		/*
-		 * We might race against __swap_cache_del_folio(), and
-		 * stumble across a swap_map entry whose SWAP_HAS_CACHE
-		 * has not yet been cleared. Or race against another
-		 * swap_cache_alloc_folio(), which has set SWAP_HAS_CACHE
-		 * in swap_map, but not yet added its folio to swap cache.
-		 */
-		schedule_timeout_uninterruptible(1);
+			goto failed;
 	}
 
-	__folio_set_locked(folio);
-	__folio_set_swapbacked(folio);
-
 	if (!charged && mem_cgroup_swapin_charge_folio(folio, NULL, gfp, entry)) {
-		put_swap_folio(folio, entry);
-		folio_unlock(folio);
-		return NULL;
+		swap_cache_del_folio(folio);
+		goto failed;
 	}
 
-	swap_cache_add_folio(folio, entry, &shadow);
 	memcg1_swapin(entry, folio_nr_pages(folio));
 	if (shadow)
 		workingset_refault(folio, shadow);
@@ -491,6 +498,10 @@ static struct folio *__swap_cache_prepare_and_add(swp_entry_t entry,
 	/* Caller will initiate read into locked folio */
 	folio_add_lru(folio);
 	return folio;
+
+failed:
+	folio_unlock(folio);
+	return swapcache;
 }
 
 /**
diff --git a/mm/swapfile.c b/mm/swapfile.c
index d9d943fc7b8d..f7c0a9eb5f04 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -1476,7 +1476,11 @@ int folio_alloc_swap(struct folio *folio)
 	if (!entry.val)
 		return -ENOMEM;
 
-	swap_cache_add_folio(folio, entry, NULL);
+	/*
+	 * Allocator has pinned the slots with SWAP_HAS_CACHE
+	 * so it should never fail
+	 */
+	WARN_ON_ONCE(swap_cache_add_folio(folio, entry, NULL, true));
 
 	return 0;
 
@@ -1582,9 +1586,8 @@ static unsigned char swap_entry_put_locked(struct swap_info_struct *si,
  * do_swap_page()
  *   ...				swapoff+swapon
  *   swap_cache_alloc_folio()
- *     swapcache_prepare()
- *       __swap_duplicate()
- *         // check swap_map
+ *     swap_cache_add_folio()
+ *       // check swap_map
  *   // verify PTE not changed
  *
  * In __swap_duplicate(), the swap_map need to be checked before
@@ -3768,17 +3771,25 @@ int swap_duplicate_nr(swp_entry_t entry, int nr)
 	return err;
 }
 
-/*
- * @entry: first swap entry from which we allocate nr swap cache.
- *
- * Called when allocating swap cache for existing swap entries,
- * This can return error codes. Returns 0 at success.
- * -EEXIST means there is a swap cache.
- * Note: return code is different from swap_duplicate().
- */
-int swapcache_prepare(swp_entry_t entry, int nr)
+/* Mark the swap map as HAS_CACHE, caller need to hold the cluster lock */
+void __swapcache_set_cached(struct swap_info_struct *si,
+			    struct swap_cluster_info *ci,
+			    swp_entry_t entry)
+{
+	WARN_ON(swap_dup_entries(si, ci, swp_offset(entry), SWAP_HAS_CACHE, 1));
+}
+
+/* Clear the swap map as !HAS_CACHE, caller need to hold the cluster lock */
+void __swapcache_clear_cached(struct swap_info_struct *si,
+			      struct swap_cluster_info *ci,
+			      swp_entry_t entry, unsigned int nr)
 {
-	return __swap_duplicate(entry, SWAP_HAS_CACHE, nr);
+	if (swap_only_has_cache(si, swp_offset(entry), nr)) {
+		swap_entries_free(si, ci, entry, nr);
+	} else {
+		for (int i = 0; i < nr; i++, entry.val++)
+			swap_entry_put_locked(si, ci, entry, SWAP_HAS_CACHE);
+	}
 }
 
 /*
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 3b85652a42b9..9483267ebf70 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -761,7 +761,6 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 		__swap_cache_del_folio(ci, folio, swap, shadow);
 		memcg1_swapout(folio, swap);
 		swap_cluster_unlock_irq(ci);
-		put_swap_folio(folio, swap);
 	} else {
 		void (*free_folio)(struct folio *);
 
-- 
2.52.0