From: Kairui Song
Date: Fri, 05 Dec 2025 03:29:24 +0800
Subject: [PATCH v4 16/19] mm, swap: check swap table directly for checking cache
Message-Id: <20251205-swap-table-p2-v4-16-cb7e28a26a40@tencent.com>
References: <20251205-swap-table-p2-v4-0-cb7e28a26a40@tencent.com>
In-Reply-To: <20251205-swap-table-p2-v4-0-cb7e28a26a40@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Baoquan He, Barry Song, Chris Li, Nhat Pham,
 Yosry Ahmed, David Hildenbrand, Johannes Weiner, Youngjun Park,
 Hugh Dickins, Baolin Wang, Ying Huang, Kemeng Shi, Lorenzo Stoakes,
 "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Kairui Song

From: Kairui Song

Instead of looking at the swap map, check the swap table directly to
tell if a swap slot is cached. This prepares for the removal of
SWAP_HAS_CACHE.

Signed-off-by: Kairui Song
---
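For review context: at each converted call site the check changes
roughly as below. This is a minimal illustrative sketch, not part of
the diff; swap_cache_has_folio() is the helper added by this patch,
and the SWAP_HAS_CACHE test is the pattern it replaces.

	/* Old: infer cache state from the swap map flag. */
	if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE)
		...;	/* slot may have swap cache (flag can be set early) */

	/* New: read the swap table, which stores the folio itself. */
	if (swap_cache_has_folio(entry))
		...;	/* slot is cached: its swap table entry holds a folio */

The swap table check also sidesteps the SWAP_HAS_CACHE false positives
noted in the old move_swap_pte() comment (e.g. SWP_SYNCHRONOUS_IO
swapin setting the flag without touching the cache).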
 mm/swap.h        | 11 ++++++++---
 mm/swap_state.c  | 16 ++++++++++++++++
 mm/swapfile.c    | 55 +++++++++++++++++++++++++++++--------------------
 mm/userfaultfd.c | 10 +++-------
 4 files changed, 56 insertions(+), 36 deletions(-)

diff --git a/mm/swap.h b/mm/swap.h
index ec1ef7d0c35b..3692e143eeba 100644
--- a/mm/swap.h
+++ b/mm/swap.h
@@ -275,6 +275,7 @@ void __swapcache_clear_cached(struct swap_info_struct *si,
  * swap entries in the page table, similar to locking swap cache folio.
  * - See the comment of get_swap_device() for more complex usage.
  */
+bool swap_cache_has_folio(swp_entry_t entry);
 struct folio *swap_cache_get_folio(swp_entry_t entry);
 void *swap_cache_get_shadow(swp_entry_t entry);
 void swap_cache_del_folio(struct folio *folio);
@@ -335,8 +336,6 @@ static inline int swap_zeromap_batch(swp_entry_t entry, int max_nr,
 
 static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
 {
-	struct swap_info_struct *si = __swap_entry_to_info(entry);
-	pgoff_t offset = swp_offset(entry);
 	int i;
 
 	/*
@@ -345,8 +344,9 @@ static inline int non_swapcache_batch(swp_entry_t entry, int max_nr)
 	 * be in conflict with the folio in swap cache.
 	 */
 	for (i = 0; i < max_nr; i++) {
-		if ((si->swap_map[offset + i] & SWAP_HAS_CACHE))
+		if (swap_cache_has_folio(entry))
 			return i;
+		entry.val++;
 	}
 
 	return i;
@@ -449,6 +449,11 @@ static inline int swap_writeout(struct folio *folio,
 	return 0;
 }
 
+static inline bool swap_cache_has_folio(swp_entry_t entry)
+{
+	return false;
+}
+
 static inline struct folio *swap_cache_get_folio(swp_entry_t entry)
 {
 	return NULL;
diff --git a/mm/swap_state.c b/mm/swap_state.c
index f478a16f43e9..6bf7556ca408 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -103,6 +103,22 @@ struct folio *swap_cache_get_folio(swp_entry_t entry)
 	return NULL;
 }
 
+/**
+ * swap_cache_has_folio - Check if a swap slot has cache.
+ * @entry: swap entry indicating the slot.
+ *
+ * Context: Caller must ensure @entry is valid and protect the swap
+ * device with reference count or locks.
+ */
+bool swap_cache_has_folio(swp_entry_t entry)
+{
+	unsigned long swp_tb;
+
+	swp_tb = swap_table_get(__swap_entry_to_cluster(entry),
+				swp_cluster_offset(entry));
+	return swp_tb_is_folio(swp_tb);
+}
+
 /**
  * swap_cache_get_shadow - Looks up a shadow in the swap cache.
  * @entry: swap entry used for the lookup.
diff --git a/mm/swapfile.c b/mm/swapfile.c
index aaa8790241a8..2cb3bfef3234 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -792,23 +792,18 @@ static bool cluster_reclaim_range(struct swap_info_struct *si,
 	unsigned int nr_pages = 1 << order;
 	unsigned long offset = start, end = start + nr_pages;
 	unsigned char *map = si->swap_map;
-	int nr_reclaim;
+	unsigned long swp_tb;
 
 	spin_unlock(&ci->lock);
 	do {
-		switch (READ_ONCE(map[offset])) {
-		case 0:
+		if (swap_count(READ_ONCE(map[offset])))
 			break;
-		case SWAP_HAS_CACHE:
-			nr_reclaim = __try_to_reclaim_swap(si, offset, TTRS_ANYWAY);
-			if (nr_reclaim < 0)
-				goto out;
-			break;
-		default:
-			goto out;
+		swp_tb = swap_table_get(ci, offset % SWAPFILE_CLUSTER);
+		if (swp_tb_is_folio(swp_tb)) {
+			if (__try_to_reclaim_swap(si, offset, TTRS_ANYWAY) < 0)
+				break;
 		}
 	} while (++offset < end);
-out:
 	spin_lock(&ci->lock);
 
 	/*
@@ -829,37 +824,41 @@ static bool cluster_reclaim_range(struct swap_info_struct *si,
 	 * Recheck the range no matter reclaim succeeded or not, the slot
 	 * could have been freed while we are not holding the lock.
 	 */
-	for (offset = start; offset < end; offset++)
-		if (READ_ONCE(map[offset]))
+	for (offset = start; offset < end; offset++) {
+		swp_tb = __swap_table_get(ci, offset % SWAPFILE_CLUSTER);
+		if (swap_count(map[offset]) || !swp_tb_is_null(swp_tb))
 			return false;
+	}
 
 	return true;
 }
 
 static bool cluster_scan_range(struct swap_info_struct *si,
 			       struct swap_cluster_info *ci,
-			       unsigned long start, unsigned int nr_pages,
+			       unsigned long offset, unsigned int nr_pages,
 			       bool *need_reclaim)
 {
-	unsigned long offset, end = start + nr_pages;
+	unsigned long end = offset + nr_pages;
 	unsigned char *map = si->swap_map;
+	unsigned long swp_tb;
 
 	if (cluster_is_empty(ci))
 		return true;
 
-	for (offset = start; offset < end; offset++) {
-		switch (READ_ONCE(map[offset])) {
-		case 0:
-			continue;
-		case SWAP_HAS_CACHE:
+	do {
+		if (swap_count(map[offset]))
+			return false;
+		swp_tb = __swap_table_get(ci, offset % SWAPFILE_CLUSTER);
+		if (swp_tb_is_folio(swp_tb)) {
+			WARN_ON_ONCE(!(map[offset] & SWAP_HAS_CACHE));
 			if (!vm_swap_full())
 				return false;
 			*need_reclaim = true;
-			continue;
-		default:
-			return false;
+		} else {
+			/* An entry with no count and no cache must be null */
+			VM_WARN_ON_ONCE(!swp_tb_is_null(swp_tb));
 		}
-	}
+	} while (++offset < end);
 
 	return true;
 }
@@ -1030,7 +1029,8 @@ static void swap_reclaim_full_clusters(struct swap_info_struct *si, bool force)
 			to_scan--;
 
 		while (offset < end) {
-			if (READ_ONCE(map[offset]) == SWAP_HAS_CACHE) {
+			if (!swap_count(READ_ONCE(map[offset])) &&
+			    swp_tb_is_folio(__swap_table_get(ci, offset % SWAPFILE_CLUSTER))) {
 				spin_unlock(&ci->lock);
 				nr_reclaim = __try_to_reclaim_swap(si, offset,
 								   TTRS_ANYWAY);
@@ -1980,6 +1980,7 @@ void swap_put_entries_direct(swp_entry_t entry, int nr)
 	struct swap_info_struct *si;
 	bool any_only_cache = false;
 	unsigned long offset;
+	unsigned long swp_tb;
 
 	si = get_swap_device(entry);
 	if (WARN_ON_ONCE(!si))
@@ -2004,7 +2005,9 @@ void swap_put_entries_direct(swp_entry_t entry, int nr)
 	 */
 	for (offset = start_offset; offset < end_offset; offset += nr) {
 		nr = 1;
-		if (READ_ONCE(si->swap_map[offset]) == SWAP_HAS_CACHE) {
+		swp_tb = swap_table_get(__swap_offset_to_cluster(si, offset),
+					offset % SWAPFILE_CLUSTER);
+		if (!swap_count(READ_ONCE(si->swap_map[offset])) && swp_tb_is_folio(swp_tb)) {
 			/*
 			 * Folios are always naturally aligned in swap so
 			 * advance forward to the next boundary. Zero means no
diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
index e6dfd5f28acd..3f28aa319988 100644
--- a/mm/userfaultfd.c
+++ b/mm/userfaultfd.c
@@ -1190,17 +1190,13 @@ static int move_swap_pte(struct mm_struct *mm, struct vm_area_struct *dst_vma,
 	 * Check if the swap entry is cached after acquiring the src_pte
 	 * lock. Otherwise, we might miss a newly loaded swap cache folio.
 	 *
-	 * Check swap_map directly to minimize overhead, READ_ONCE is sufficient.
 	 * We are trying to catch newly added swap cache, the only possible case is
 	 * when a folio is swapped in and out again staying in swap cache, using the
 	 * same entry before the PTE check above. The PTL is acquired and released
-	 * twice, each time after updating the swap_map's flag. So holding
-	 * the PTL here ensures we see the updated value. False positive is possible,
-	 * e.g. SWP_SYNCHRONOUS_IO swapin may set the flag without touching the
-	 * cache, or during the tiny synchronization window between swap cache and
-	 * swap_map, but it will be gone very quickly, worst result is retry jitters.
+	 * twice, each time after updating the swap table. So holding
+	 * the PTL here ensures we see the updated value.
 	 */
-	if (READ_ONCE(si->swap_map[swp_offset(entry)]) & SWAP_HAS_CACHE) {
+	if (swap_cache_has_folio(entry)) {
 		double_pt_unlock(dst_ptl, src_ptl);
 		return -EAGAIN;
 	}
-- 
2.52.0
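A consolidated sketch of the predicate the mm/swapfile.c hunks above
now apply before attempting reclaim, assuming the swap device is
pinned and ci is the cluster covering offset. slot_cached_only() is a
hypothetical name used only for illustration; swap_count(),
swap_table_get() and swp_tb_is_folio() are the helpers used in the
diff.

	/*
	 * Sketch only: true when a slot holds no swap count but still has
	 * a folio in the swap cache, i.e. the condition under which
	 * swap_reclaim_full_clusters() and swap_put_entries_direct() now
	 * try reclaim.
	 */
	static inline bool slot_cached_only(struct swap_info_struct *si,
					    struct swap_cluster_info *ci,
					    unsigned long offset)
	{
		unsigned long swp_tb = swap_table_get(ci, offset % SWAPFILE_CLUSTER);

		return !swap_count(READ_ONCE(si->swap_map[offset])) &&
		       swp_tb_is_folio(swp_tb);
	}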