From nobody Wed Feb 11 08:12:14 2026
From: Nhat Pham
To: linux-mm@kvack.org
Cc: akpm@linux-foundation.org, hannes@cmpxchg.org, hughd@google.com, yosry.ahmed@linux.dev, mhocko@kernel.org, roman.gushchin@linux.dev, shakeel.butt@linux.dev, muchun.song@linux.dev, len.brown@intel.com, chengming.zhou@linux.dev, kasong@tencent.com, chrisl@kernel.org, huang.ying.caritas@gmail.com, ryan.roberts@arm.com, shikemeng@huaweicloud.com, viro@zeniv.linux.org.uk, baohua@kernel.org, bhe@redhat.com, osalvador@suse.de, lorenzo.stoakes@oracle.com, christophe.leroy@csgroup.eu, pavel@kernel.org, kernel-team@meta.com, linux-kernel@vger.kernel.org, cgroups@vger.kernel.org, linux-pm@vger.kernel.org, peterx@redhat.com, riel@surriel.com, joshua.hahnjy@gmail.com, npache@redhat.com, gourry@gourry.net, axelrasmussen@google.com, yuanchu@google.com, weixugc@google.com, rafael@kernel.org, jannh@google.com, pfalcato@suse.de, zhengqi.arch@bytedance.com
Subject: [PATCH v3 10/20] swap: move swap cache to virtual swap descriptor
Date: Sun, 8 Feb 2026 13:58:23 -0800
Message-ID: <20260208215839.87595-11-nphamcs@gmail.com>
X-Mailer: git-send-email 2.47.3
In-Reply-To: <20260208215839.87595-1-nphamcs@gmail.com>
References: <20260208215839.87595-1-nphamcs@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Move the swap cache (and the workingset shadow for anonymous pages) to
the virtual swap descriptor. This effectively range-partitions the swap
cache by PMD-sized virtual swap clusters, eliminating swap cache lock
contention.
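To make the partitioning concrete, here is a minimal userspace sketch
(illustrative only; not kernel code and not part of this patch): a swap
entry value is split into a cluster index and an offset within that
cluster, and each cluster carries its own lock, so operations on entries
in different clusters never contend on a shared lock. The cluster size,
the pthread mutex standing in for the cluster spinlock, and the helper
names (cluster_lock/cluster_unlock, idx_in_cluster) are assumptions made
for this example; the kernel side uses VSWAP_CLUSTER_SHIFT
(HPAGE_PMD_ORDER) and per-cluster spinlocks as in the diff below.

/*
 * Userspace model of per-cluster swap cache locking (illustrative only).
 * Build: cc -pthread sketch.c
 */
#include <assert.h>
#include <pthread.h>
#include <stdio.h>

#define CLUSTER_SHIFT	9			/* assumption; the patch uses HPAGE_PMD_ORDER */
#define CLUSTER_SIZE	(1UL << CLUSTER_SHIFT)
#define NR_CLUSTERS	4

struct swp_desc {
	void *swap_cache;			/* folio pointer or shadow value */
};

struct vswap_cluster {
	pthread_mutex_t lock;			/* stand-in for the cluster spinlock */
	struct swp_desc descriptors[CLUSTER_SIZE];
};

static struct vswap_cluster clusters[NR_CLUSTERS];

static unsigned long cluster_idx(unsigned long entry)
{
	return entry >> CLUSTER_SHIFT;
}

static unsigned long idx_in_cluster(unsigned long entry)
{
	return entry & (CLUSTER_SIZE - 1);
}

/* Analogue of swap_cache_lock()/swap_cache_unlock(), keyed by the entry. */
static void cluster_lock(unsigned long entry)
{
	pthread_mutex_lock(&clusters[cluster_idx(entry)].lock);
}

static void cluster_unlock(unsigned long entry)
{
	pthread_mutex_unlock(&clusters[cluster_idx(entry)].lock);
}

int main(void)
{
	unsigned long a = 5, b = CLUSTER_SIZE + 5;	/* same offset, different clusters */
	int i;

	for (i = 0; i < NR_CLUSTERS; i++)
		pthread_mutex_init(&clusters[i].lock, NULL);

	assert(cluster_idx(a) != cluster_idx(b));

	/* Both locks can be held at once: no shared, global swap cache lock. */
	cluster_lock(a);
	cluster_lock(b);
	clusters[cluster_idx(a)].descriptors[idx_in_cluster(a)].swap_cache = &a;
	clusters[cluster_idx(b)].descriptors[idx_in_cluster(b)].swap_cache = &b;
	cluster_unlock(b);
	cluster_unlock(a);

	printf("entry %lu -> cluster %lu, slot %lu\n", a, cluster_idx(a), idx_in_cluster(a));
	printf("entry %lu -> cluster %lu, slot %lu\n", b, cluster_idx(b), idx_in_cluster(b));
	return 0;
}

Under the previous design, both lock calls in the sketch would serialize
on the single swap cache XArray lock; with per-cluster locks they proceed
independently.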
Signed-off-by: Nhat Pham
---
 mm/huge_memory.c |   4 +-
 mm/migrate.c     |   6 +-
 mm/shmem.c       |   4 +-
 mm/swap.h        |  16 +--
 mm/swap_state.c  | 251 +--------------------------------
 mm/vmscan.c      |   6 +-
 mm/vswap.c       | 350 ++++++++++++++++++++++++++++++++++++++++++++++-
 7 files changed, 364 insertions(+), 273 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 21215ac870144..dcbd3821d6178 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3825,7 +3825,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
 			return -EINVAL;
 		}
 
-		swap_cache_lock();
+		swap_cache_lock(folio->swap);
 	}
 
 	/* lock lru list/PageCompound, ref frozen by page_ref_freeze */
@@ -3901,7 +3901,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
 		unlock_page_lruvec(lruvec);
 
 		if (folio_test_swapcache(folio))
-			swap_cache_unlock();
+			swap_cache_unlock(folio->swap);
 	} else {
 		split_queue_unlock(ds_queue);
 		return -EAGAIN;
diff --git a/mm/migrate.c b/mm/migrate.c
index 11d9b43dff5d8..e850b05a232de 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -600,13 +600,13 @@ static int __folio_migrate_mapping(struct address_space *mapping,
 	newzone = folio_zone(newfolio);
 
 	if (folio_test_swapcache(folio))
-		swap_cache_lock_irq();
+		swap_cache_lock_irq(folio->swap);
 	else
 		xas_lock_irq(&xas);
 
 	if (!folio_ref_freeze(folio, expected_count)) {
 		if (folio_test_swapcache(folio))
-			swap_cache_unlock_irq();
+			swap_cache_unlock_irq(folio->swap);
 		else
 			xas_unlock_irq(&xas);
 		return -EAGAIN;
@@ -652,7 +652,7 @@ static int __folio_migrate_mapping(struct address_space *mapping,
 
 	/* Leave irq disabled to prevent preemption while updating stats */
 	if (folio_test_swapcache(folio))
-		swap_cache_unlock();
+		swap_cache_unlock(folio->swap);
 	else
 		xas_unlock(&xas);
 
diff --git a/mm/shmem.c b/mm/shmem.c
index 13f7469a04c8a..66cf8af6779ca 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -2168,12 +2168,12 @@ static int shmem_replace_folio(struct folio **foliop, gfp_t gfp,
 	new->swap = entry;
 	folio_set_swapcache(new);
 
-	swap_cache_lock_irq();
+	swap_cache_lock_irq(entry);
 	__swap_cache_replace_folio(old, new);
 	mem_cgroup_replace_folio(old, new);
 	shmem_update_stats(new, nr_pages);
 	shmem_update_stats(old, -nr_pages);
-	swap_cache_unlock_irq();
+	swap_cache_unlock_irq(entry);
 
 	folio_add_lru(new);
 	*foliop = new;
diff --git a/mm/swap.h b/mm/swap.h
index 5eb53758bbd5d..57ed24a2d6356 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -205,10 +205,12 @@ void __swap_writepage(struct folio *folio, struct swap_iocb **swap_plug);
 
 /* linux/mm/swap_state.c */
 extern struct address_space swap_space __read_mostly;
-void swap_cache_lock_irq(void);
-void swap_cache_unlock_irq(void);
-void swap_cache_lock(void);
-void swap_cache_unlock(void);
+
+/* linux/mm/vswap.c */
+void swap_cache_lock_irq(swp_entry_t entry);
+void swap_cache_unlock_irq(swp_entry_t entry);
+void swap_cache_lock(swp_entry_t entry);
+void swap_cache_unlock(swp_entry_t entry);
 
 static inline struct address_space *swap_address_space(swp_entry_t entry)
 {
@@ -256,12 +258,11 @@ static inline bool folio_matches_swap_entry(const struct folio *folio,
  */
 struct folio *swap_cache_get_folio(swp_entry_t entry);
 void *swap_cache_get_shadow(swp_entry_t entry);
-int swap_cache_add_folio(struct folio *folio, swp_entry_t entry, gfp_t gfp, void **shadow);
+void swap_cache_add_folio(struct folio *folio, swp_entry_t entry, void **shadow);
 void swap_cache_del_folio(struct folio *folio);
 /* Below helpers require the caller to lock the swap cache. */
 void __swap_cache_del_folio(struct folio *folio, swp_entry_t entry, void *shadow);
 void __swap_cache_replace_folio(struct folio *old, struct folio *new);
-void swap_cache_clear_shadow(swp_entry_t entry, int nr_ents);
 
 void show_swap_cache_info(void);
 void swapcache_clear(struct swap_info_struct *si, swp_entry_t entry, int nr);
@@ -422,9 +423,8 @@ static inline void *swap_cache_get_shadow(swp_entry_t entry)
 	return NULL;
 }
 
-static inline int swap_cache_add_folio(struct folio *folio, swp_entry_t entry, gfp_t gfp, void **shadow)
+static inline void swap_cache_add_folio(struct folio *folio, swp_entry_t entry, void **shadow)
 {
-	return 0;
 }
 
 static inline void swap_cache_del_folio(struct folio *folio)
diff --git a/mm/swap_state.c b/mm/swap_state.c
index c5ceccd756699..00fa3e76a5c19 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -41,28 +41,6 @@ struct address_space swap_space __read_mostly = {
 	.a_ops = &swap_aops,
 };
 
-static DEFINE_XARRAY(swap_cache);
-
-void swap_cache_lock_irq(void)
-{
-	xa_lock_irq(&swap_cache);
-}
-
-void swap_cache_unlock_irq(void)
-{
-	xa_unlock_irq(&swap_cache);
-}
-
-void swap_cache_lock(void)
-{
-	xa_lock(&swap_cache);
-}
-
-void swap_cache_unlock(void)
-{
-	xa_unlock(&swap_cache);
-}
-
 static bool enable_vma_readahead __read_mostly = true;
 
 #define SWAP_RA_ORDER_CEILING 5
@@ -94,231 +72,6 @@ void show_swap_cache_info(void)
 	printk("Total swap = %lukB\n", K(total_swap_pages));
 }
 
-/**
- * swap_cache_get_folio - Looks up a folio in the swap cache.
- * @entry: swap entry used for the lookup.
- *
- * A found folio will be returned unlocked and with its refcount increased.
- *
- * Context: Caller must ensure @entry is valid and protect the swap device
- * with reference count or locks.
- * Return: Returns the found folio on success, NULL otherwise. The caller
- * must lock nd check if the folio still matches the swap entry before
- * use (e.g., folio_matches_swap_entry).
- */
-struct folio *swap_cache_get_folio(swp_entry_t entry)
-{
-	void *entry_val;
-	struct folio *folio;
-
-	for (;;) {
-		rcu_read_lock();
-		entry_val = xa_load(&swap_cache, entry.val);
-		if (!entry_val || xa_is_value(entry_val)) {
-			rcu_read_unlock();
-			return NULL;
-		}
-		folio = entry_val;
-		if (likely(folio_try_get(folio))) {
-			rcu_read_unlock();
-			return folio;
-		}
-		rcu_read_unlock();
-	}
-
-	return NULL;
-}
-
-/**
- * swap_cache_get_shadow - Looks up a shadow in the swap cache.
- * @entry: swap entry used for the lookup.
- *
- * Context: Caller must ensure @entry is valid and protect the swap device
- * with reference count or locks.
- * Return: Returns either NULL or an XA_VALUE (shadow).
- */
-void *swap_cache_get_shadow(swp_entry_t entry)
-{
-	void *entry_val;
-
-	rcu_read_lock();
-	entry_val = xa_load(&swap_cache, entry.val);
-	rcu_read_unlock();
-
-	if (xa_is_value(entry_val))
-		return entry_val;
-	return NULL;
-}
-
-/**
- * swap_cache_add_folio - Add a folio into the swap cache.
- * @folio: The folio to be added.
- * @entry: The swap entry corresponding to the folio.
- * @gfp: gfp_mask for XArray node allocation.
- * @shadowp: If a shadow is found, return the shadow.
- *
- * Context: Caller must ensure @entry is valid and protect the swap device
- * with reference count or locks.
- * The caller also needs to update the corresponding swap_map slots with
- * SWAP_HAS_CACHE bit to avoid race or conflict.
- *
- * Return: 0 on success, negative error code on failure.
- */
-int swap_cache_add_folio(struct folio *folio, swp_entry_t entry, gfp_t gfp, void **shadowp)
-{
-	XA_STATE_ORDER(xas, &swap_cache, entry.val, folio_order(folio));
-	unsigned long nr_pages = folio_nr_pages(folio);
-	unsigned long i;
-	void *old;
-
-	VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
-	VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio);
-	VM_WARN_ON_ONCE_FOLIO(!folio_test_swapbacked(folio), folio);
-
-	folio_ref_add(folio, nr_pages);
-	folio_set_swapcache(folio);
-	folio->swap = entry;
-
-	do {
-		xas_lock_irq(&xas);
-		xas_create_range(&xas);
-		if (xas_error(&xas))
-			goto unlock;
-		for (i = 0; i < nr_pages; i++) {
-			VM_BUG_ON_FOLIO(xas.xa_index != entry.val + i, folio);
-			old = xas_load(&xas);
-			if (old && !xa_is_value(old)) {
-				VM_WARN_ON_ONCE_FOLIO(1, folio);
-				xas_set_err(&xas, -EEXIST);
-				goto unlock;
-			}
-			if (shadowp && xa_is_value(old) && !*shadowp)
-				*shadowp = old;
-			xas_store(&xas, folio);
-			xas_next(&xas);
-		}
-		node_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages);
-		lruvec_stat_mod_folio(folio, NR_SWAPCACHE, nr_pages);
-unlock:
-		xas_unlock_irq(&xas);
-	} while (xas_nomem(&xas, gfp));
-
-	if (!xas_error(&xas))
-		return 0;
-
-	folio_clear_swapcache(folio);
-	folio_ref_sub(folio, nr_pages);
-	return xas_error(&xas);
-}
-
-/**
- * __swap_cache_del_folio - Removes a folio from the swap cache.
- * @folio: The folio.
- * @entry: The first swap entry that the folio corresponds to.
- * @shadow: shadow value to be filled in the swap cache.
- *
- * Removes a folio from the swap cache and fills a shadow in place.
- * This won't put the folio's refcount. The caller has to do that.
- *
- * Context: Caller must ensure the folio is locked and in the swap cache
- * using the index of @entry, and lock the swap cache xarray.
- */
-void __swap_cache_del_folio(struct folio *folio, swp_entry_t entry, void *shadow)
-{
-	long nr_pages = folio_nr_pages(folio);
-	XA_STATE(xas, &swap_cache, entry.val);
-	int i;
-
-	VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
-	VM_WARN_ON_ONCE_FOLIO(!folio_test_swapcache(folio), folio);
-	VM_WARN_ON_ONCE_FOLIO(folio_test_writeback(folio), folio);
-
-	for (i = 0; i < nr_pages; i++) {
-		void *old = xas_store(&xas, shadow);
-		VM_WARN_ON_FOLIO(old != folio, folio);
-		xas_next(&xas);
-	}
-
-	folio->swap.val = 0;
-	folio_clear_swapcache(folio);
-	node_stat_mod_folio(folio, NR_FILE_PAGES, -nr_pages);
-	lruvec_stat_mod_folio(folio, NR_SWAPCACHE, -nr_pages);
-}
-
-/**
- * swap_cache_del_folio - Removes a folio from the swap cache.
- * @folio: The folio.
- *
- * Same as __swap_cache_del_folio, but handles lock and refcount. The
- * caller must ensure the folio is either clean or has a swap count
- * equal to zero, or it may cause data loss.
- *
- * Context: Caller must ensure the folio is locked and in the swap cache.
- */
-void swap_cache_del_folio(struct folio *folio)
-{
-	swp_entry_t entry = folio->swap;
-
-	xa_lock_irq(&swap_cache);
-	__swap_cache_del_folio(folio, entry, NULL);
-	xa_unlock_irq(&swap_cache);
-
-	put_swap_folio(folio, entry);
-	folio_ref_sub(folio, folio_nr_pages(folio));
-}
-
-/**
- * __swap_cache_replace_folio - Replace a folio in the swap cache.
- * @old: The old folio to be replaced.
- * @new: The new folio.
- *
- * Replace an existing folio in the swap cache with a new folio. The
- * caller is responsible for setting up the new folio's flag and swap
- * entries. Replacement will take the new folio's swap entry value as
- * the starting offset to override all slots covered by the new folio.
- *
- * Context: Caller must ensure both folios are locked, and lock the
- * swap cache xarray.
- */
-void __swap_cache_replace_folio(struct folio *old, struct folio *new)
-{
-	swp_entry_t entry = new->swap;
-	unsigned long nr_pages = folio_nr_pages(new);
-	XA_STATE(xas, &swap_cache, entry.val);
-	int i;
-
-	VM_WARN_ON_ONCE(!folio_test_swapcache(old) || !folio_test_swapcache(new));
-	VM_WARN_ON_ONCE(!folio_test_locked(old) || !folio_test_locked(new));
-	VM_WARN_ON_ONCE(!entry.val);
-
-	for (i = 0; i < nr_pages; i++) {
-		void *old_entry = xas_store(&xas, new);
-		WARN_ON_ONCE(!old_entry || xa_is_value(old_entry) || old_entry != old);
-		xas_next(&xas);
-	}
-}
-
-/**
- * swap_cache_clear_shadow - Clears a set of shadows in the swap cache.
- * @entry: The starting index entry.
- * @nr_ents: How many slots need to be cleared.
- *
- * Context: Caller must ensure the range is valid and all in one single cluster,
- * not occupied by any folio.
- */
-void swap_cache_clear_shadow(swp_entry_t entry, int nr_ents)
-{
-	XA_STATE(xas, &swap_cache, entry.val);
-	int i;
-
-	xas_lock(&xas);
-	for (i = 0; i < nr_ents; i++) {
-		xas_store(&xas, NULL);
-		xas_next(&xas);
-	}
-	xas_unlock(&xas);
-}
-
 /*
  * If we are the only user, then try to free up the swap cache.
  *
@@ -497,9 +250,7 @@ struct folio *__read_swap_cache_async(swp_entry_t entry, gfp_t gfp_mask,
 	if (mem_cgroup_swapin_charge_folio(new_folio, NULL, gfp_mask, entry))
 		goto fail_unlock;
 
-	/* May fail (-ENOMEM) if XArray node allocation failed. */
-	if (swap_cache_add_folio(new_folio, entry, gfp_mask & GFP_RECLAIM_MASK, &shadow))
-		goto fail_unlock;
+	swap_cache_add_folio(new_folio, entry, &shadow);
 
 	memcg1_swapin(entry, 1);
 
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 558ff7f413786..c9ec1a1458b4e 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -712,7 +712,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 	BUG_ON(mapping != folio_mapping(folio));
 
 	if (folio_test_swapcache(folio)) {
-		swap_cache_lock_irq();
+		swap_cache_lock_irq(folio->swap);
 	} else {
 		spin_lock(&mapping->host->i_lock);
 		xa_lock_irq(&mapping->i_pages);
@@ -759,7 +759,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 		shadow = workingset_eviction(folio, target_memcg);
 		__swap_cache_del_folio(folio, swap, shadow);
 		memcg1_swapout(folio, swap);
-		swap_cache_unlock_irq();
+		swap_cache_unlock_irq(swap);
 		put_swap_folio(folio, swap);
 	} else {
 		void (*free_folio)(struct folio *);
@@ -798,7 +798,7 @@ static int __remove_mapping(struct address_space *mapping, struct folio *folio,
 
 cannot_free:
 	if (folio_test_swapcache(folio)) {
-		swap_cache_unlock_irq();
+		swap_cache_unlock_irq(folio->swap);
 	} else {
 		xa_unlock_irq(&mapping->i_pages);
 		spin_unlock(&mapping->host->i_lock);
diff --git a/mm/vswap.c b/mm/vswap.c
index 9aa95558f320a..d44199dc059a3 100644
--- a/mm/vswap.c
+++ b/mm/vswap.c
@@ -37,9 +37,15 @@
  * Swap descriptor - metadata of a swapped out page.
  *
  * @slot: The handle to the physical swap slot backing this page.
+ * @swap_cache: The folio in swap cache.
+ * @shadow: The shadow entry.
  */
 struct swp_desc {
 	swp_slot_t slot;
+	union {
+		struct folio *swap_cache;
+		void *shadow;
+	};
 };
 
 #define VSWAP_CLUSTER_SHIFT HPAGE_PMD_ORDER
@@ -170,6 +176,24 @@ static int vswap_debug_fs_init(void)
 }
 #endif
 
+/*
+ * Lockless version of vswap_iter - assumes caller holds cluster lock.
+ * Used when iterating within the same cluster with the lock already held.
+ */
+static struct swp_desc *__vswap_iter(struct vswap_cluster *cluster, unsigned long i)
+{
+	unsigned long slot_index;
+
+	lockdep_assert_held(&cluster->lock);
+	VM_WARN_ON(cluster->id != VSWAP_VAL_CLUSTER_IDX(i));
+
+	slot_index = VSWAP_IDX_WITHIN_CLUSTER_VAL(i);
+	if (test_bit(slot_index, cluster->bitmap))
+		return &cluster->descriptors[slot_index];
+
+	return NULL;
+}
+
 static struct swp_desc *vswap_iter(struct vswap_cluster **clusterp, unsigned long i)
 {
 	unsigned long cluster_id = VSWAP_VAL_CLUSTER_IDX(i);
@@ -448,7 +472,6 @@ void vswap_free(swp_entry_t entry, struct swap_cluster_info *ci)
 	if (!entry.val)
 		return;
 
-	swap_cache_clear_shadow(entry, 1);
 	zswap_invalidate(entry);
 	mem_cgroup_uncharge_swap(entry, 1);
 
@@ -460,6 +483,10 @@ void vswap_free(swp_entry_t entry, struct swap_cluster_info *ci)
 		return;
 	}
 
+	/* Clear shadow if present */
+	if (xa_is_value(desc->shadow))
+		desc->shadow = NULL;
+
 	if (desc->slot.val)
 		vswap_rmap_set(ci, desc->slot, 0, 1);
 
@@ -480,7 +507,7 @@ int folio_alloc_swap(struct folio *folio)
 	struct vswap_cluster *cluster = NULL;
 	struct swap_info_struct *si;
 	struct swap_cluster_info *ci;
-	int i, err, nr = folio_nr_pages(folio), order = folio_order(folio);
+	int i, nr = folio_nr_pages(folio), order = folio_order(folio);
 	struct swp_desc *desc;
 	swp_entry_t entry;
 	swp_slot_t slot;
@@ -533,9 +560,7 @@ int folio_alloc_swap(struct folio *folio)
 	if (mem_cgroup_try_charge_swap(folio, entry))
 		goto out_free;
 
-	err = swap_cache_add_folio(folio, entry, __GFP_HIGH | __GFP_NOMEMALLOC | __GFP_NOWARN, NULL);
-	if (err)
-		goto out_free;
+	swap_cache_add_folio(folio, entry, NULL);
 
 	return 0;
 
@@ -668,6 +693,321 @@ static int vswap_cpu_dead(unsigned int cpu)
 	return 0;
 }
 
+/**
+ * swap_cache_lock - lock the swap cache for a swap entry
+ * @entry: the swap entry
+ *
+ * Locks the vswap cluster spinlock for the given swap entry.
+ */
+void swap_cache_lock(swp_entry_t entry)
+{
+	struct vswap_cluster *cluster;
+	unsigned long cluster_id = VSWAP_CLUSTER_IDX(entry);
+
+	rcu_read_lock();
+	cluster = xa_load(&vswap_cluster_map, cluster_id);
+	VM_WARN_ON(!cluster);
+	spin_lock(&cluster->lock);
+	rcu_read_unlock();
+}
+
+/**
+ * swap_cache_unlock - unlock the swap cache for a swap entry
+ * @entry: the swap entry
+ *
+ * Unlocks the vswap cluster spinlock for the given swap entry.
+ */
+void swap_cache_unlock(swp_entry_t entry)
+{
+	struct vswap_cluster *cluster;
+	unsigned long cluster_id = VSWAP_CLUSTER_IDX(entry);
+
+	rcu_read_lock();
+	cluster = xa_load(&vswap_cluster_map, cluster_id);
+	VM_WARN_ON(!cluster);
+	spin_unlock(&cluster->lock);
+	rcu_read_unlock();
+}
+
+/**
+ * swap_cache_lock_irq - lock the swap cache with interrupts disabled
+ * @entry: the swap entry
+ *
+ * Locks the vswap cluster spinlock and disables interrupts for the given swap entry.
+ */
+void swap_cache_lock_irq(swp_entry_t entry)
+{
+	struct vswap_cluster *cluster;
+	unsigned long cluster_id = VSWAP_CLUSTER_IDX(entry);
+
+	rcu_read_lock();
+	cluster = xa_load(&vswap_cluster_map, cluster_id);
+	VM_WARN_ON(!cluster);
+	spin_lock_irq(&cluster->lock);
+	rcu_read_unlock();
+}
+
+/**
+ * swap_cache_unlock_irq - unlock the swap cache with interrupts enabled
+ * @entry: the swap entry
+ *
+ * Unlocks the vswap cluster spinlock and enables interrupts for the given swap entry.
+ */
+void swap_cache_unlock_irq(swp_entry_t entry)
+{
+	struct vswap_cluster *cluster;
+	unsigned long cluster_id = VSWAP_CLUSTER_IDX(entry);
+
+	rcu_read_lock();
+	cluster = xa_load(&vswap_cluster_map, cluster_id);
+	VM_WARN_ON(!cluster);
+	spin_unlock_irq(&cluster->lock);
+	rcu_read_unlock();
+}
+
+/**
+ * swap_cache_get_folio - Looks up a folio in the swap cache.
+ * @entry: swap entry used for the lookup.
+ *
+ * A found folio will be returned unlocked and with its refcount increased.
+ *
+ * Context: Caller must ensure @entry is valid and protect the cluster with
+ * reference count or locks.
+ *
+ * Return: Returns the found folio on success, NULL otherwise. The caller
+ * must lock and check if the folio still matches the swap entry before
+ * use (e.g., folio_matches_swap_entry).
+ */
+struct folio *swap_cache_get_folio(swp_entry_t entry)
+{
+	struct vswap_cluster *cluster = NULL;
+	struct swp_desc *desc;
+	struct folio *folio;
+
+	for (;;) {
+		rcu_read_lock();
+		desc = vswap_iter(&cluster, entry.val);
+		if (!desc) {
+			rcu_read_unlock();
+			return NULL;
+		}
+
+		/* Check if this is a shadow value (xa_is_value equivalent) */
+		if (xa_is_value(desc->shadow)) {
+			spin_unlock(&cluster->lock);
+			rcu_read_unlock();
+			return NULL;
+		}
+
+		folio = desc->swap_cache;
+		if (!folio) {
+			spin_unlock(&cluster->lock);
+			rcu_read_unlock();
+			return NULL;
+		}
+
+		if (likely(folio_try_get(folio))) {
+			spin_unlock(&cluster->lock);
+			rcu_read_unlock();
+			return folio;
+		}
+		spin_unlock(&cluster->lock);
+		rcu_read_unlock();
+	}
+
+	return NULL;
+}
+
+/**
+ * swap_cache_get_shadow - Looks up a shadow in the swap cache.
+ * @entry: swap entry used for the lookup.
+ *
+ * Context: Caller must ensure @entry is valid and protect the cluster with
+ * reference count or locks.
+ *
+ * Return: Returns either NULL or an XA_VALUE (shadow).
+ */
+void *swap_cache_get_shadow(swp_entry_t entry)
+{
+	struct vswap_cluster *cluster = NULL;
+	struct swp_desc *desc;
+	void *shadow;
+
+	rcu_read_lock();
+	desc = vswap_iter(&cluster, entry.val);
+	if (!desc) {
+		rcu_read_unlock();
+		return NULL;
+	}
+
+	shadow = desc->shadow;
+	spin_unlock(&cluster->lock);
+	rcu_read_unlock();
+
+	if (xa_is_value(shadow))
+		return shadow;
+	return NULL;
+}
+
+/**
+ * swap_cache_add_folio - Add a folio into the swap cache.
+ * @folio: The folio to be added.
+ * @entry: The swap entry corresponding to the folio.
+ * @shadowp: If a shadow is found, return the shadow.
+ *
+ * Context: Caller must ensure @entry is valid and protect the cluster with
+ * reference count or locks.
+ *
+ * The caller also needs to update the corresponding swap_map slots with
+ * SWAP_HAS_CACHE bit to avoid race or conflict.
+ */
+void swap_cache_add_folio(struct folio *folio, swp_entry_t entry, void **shadowp)
+{
+	struct vswap_cluster *cluster;
+	unsigned long nr_pages = folio_nr_pages(folio);
+	unsigned long cluster_id = VSWAP_CLUSTER_IDX(entry);
+	unsigned long i;
+	struct swp_desc *desc;
+	void *old;
+
+	VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
+	VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio);
+	VM_WARN_ON_ONCE_FOLIO(!folio_test_swapbacked(folio), folio);
+
+	folio_ref_add(folio, nr_pages);
+	folio_set_swapcache(folio);
+	folio->swap = entry;
+
+	rcu_read_lock();
+	cluster = xa_load(&vswap_cluster_map, cluster_id);
+	VM_WARN_ON(!cluster);
+	spin_lock_irq(&cluster->lock);
+
+	for (i = 0; i < nr_pages; i++) {
+		desc = __vswap_iter(cluster, entry.val + i);
+		VM_WARN_ON(!desc);
+		old = desc->shadow;
+
+		/* Warn if slot is already occupied by a folio */
+		VM_WARN_ON_FOLIO(old && !xa_is_value(old), folio);
+
+		/* Save shadow if found and not yet saved */
+		if (shadowp && xa_is_value(old) && !*shadowp)
+			*shadowp = old;
+
+		desc->swap_cache = folio;
+	}
+
+	spin_unlock_irq(&cluster->lock);
+	rcu_read_unlock();
+
+	node_stat_mod_folio(folio, NR_FILE_PAGES, nr_pages);
+	lruvec_stat_mod_folio(folio, NR_SWAPCACHE, nr_pages);
+}
+
+/**
+ * __swap_cache_del_folio - Removes a folio from the swap cache.
+ * @folio: The folio.
+ * @entry: The first swap entry that the folio corresponds to.
+ * @shadow: shadow value to be filled in the swap cache.
+ *
+ * Removes a folio from the swap cache and fills a shadow in place.
+ * This won't put the folio's refcount. The caller has to do that.
+ *
+ * Context: Caller must ensure the folio is locked and in the swap cache
+ * using the index of @entry, and lock the swap cache.
+ */
+void __swap_cache_del_folio(struct folio *folio, swp_entry_t entry, void *shadow)
+{
+	long nr_pages = folio_nr_pages(folio);
+	struct vswap_cluster *cluster;
+	struct swp_desc *desc;
+	unsigned long cluster_id = VSWAP_CLUSTER_IDX(entry);
+	int i;
+
+	VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
+	VM_WARN_ON_ONCE_FOLIO(!folio_test_swapcache(folio), folio);
+	VM_WARN_ON_ONCE_FOLIO(folio_test_writeback(folio), folio);
+
+	rcu_read_lock();
+	cluster = xa_load(&vswap_cluster_map, cluster_id);
+	VM_WARN_ON(!cluster);
+
+	for (i = 0; i < nr_pages; i++) {
+		desc = __vswap_iter(cluster, entry.val + i);
+		VM_WARN_ON_FOLIO(!desc || desc->swap_cache != folio, folio);
+		desc->shadow = shadow;
+	}
+	rcu_read_unlock();
+
+	folio->swap.val = 0;
+	folio_clear_swapcache(folio);
+	node_stat_mod_folio(folio, NR_FILE_PAGES, -nr_pages);
+	lruvec_stat_mod_folio(folio, NR_SWAPCACHE, -nr_pages);
+}
+
+/**
+ * swap_cache_del_folio - Removes a folio from the swap cache.
+ * @folio: The folio.
+ *
+ * Same as __swap_cache_del_folio, but handles lock and refcount. The
+ * caller must ensure the folio is either clean or has a swap count
+ * equal to zero, or it may cause data loss.
+ *
+ * Context: Caller must ensure the folio is locked and in the swap cache.
+ */
+void swap_cache_del_folio(struct folio *folio)
+{
+	swp_entry_t entry = folio->swap;
+
+	swap_cache_lock_irq(entry);
+	__swap_cache_del_folio(folio, entry, NULL);
+	swap_cache_unlock_irq(entry);
+
+	put_swap_folio(folio, entry);
+	folio_ref_sub(folio, folio_nr_pages(folio));
+}
+
+/**
+ * __swap_cache_replace_folio - Replace a folio in the swap cache.
+ * @old: The old folio to be replaced.
+ * @new: The new folio.
+ *
+ * Replace an existing folio in the swap cache with a new folio. The
+ * caller is responsible for setting up the new folio's flag and swap
+ * entries. Replacement will take the new folio's swap entry value as
+ * the starting offset to override all slots covered by the new folio.
+ *
+ * Context: Caller must ensure both folios are locked, and lock the
+ * swap cache.
+ */
+void __swap_cache_replace_folio(struct folio *old, struct folio *new)
+{
+	swp_entry_t entry = new->swap;
+	unsigned long nr_pages = folio_nr_pages(new);
+	struct vswap_cluster *cluster;
+	struct swp_desc *desc;
+	unsigned long cluster_id = VSWAP_CLUSTER_IDX(entry);
+	void *old_entry;
+	int i;
+
+	VM_WARN_ON_ONCE(!folio_test_swapcache(old) || !folio_test_swapcache(new));
+	VM_WARN_ON_ONCE(!folio_test_locked(old) || !folio_test_locked(new));
+	VM_WARN_ON_ONCE(!entry.val);
+
+	rcu_read_lock();
+	cluster = xa_load(&vswap_cluster_map, cluster_id);
+	VM_WARN_ON(!cluster);
+
+	for (i = 0; i < nr_pages; i++) {
+		desc = __vswap_iter(cluster, entry.val + i);
+		VM_WARN_ON(!desc);
+		old_entry = desc->swap_cache;
+		VM_WARN_ON(!old_entry || xa_is_value(old_entry) || old_entry != old);
+		desc->swap_cache = new;
+	}
+	rcu_read_unlock();
+}
 
 int vswap_init(void)
 {
-- 
2.47.3