From: Kairui Song <kasong@tencent.com>
Date: Tue, 25 Nov 2025 03:13:45 +0800
Subject: [PATCH v3 02/19] mm, swap: split swap cache preparation loop into a standalone helper
Message-Id: <20251125-swap-table-p2-v3-2-33f54f707a5c@tencent.com>
References: <20251125-swap-table-p2-v3-0-33f54f707a5c@tencent.com>
In-Reply-To: <20251125-swap-table-p2-v3-0-33f54f707a5c@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Baoquan He, Barry Song, Chris Li, Nhat Pham,
    Yosry Ahmed, David Hildenbrand, Johannes Weiner, Youngjun Park,
    Hugh Dickins, Baolin Wang, Ying Huang, Kemeng Shi, Lorenzo Stoakes,
    "Matthew Wilcox (Oracle)", linux-kernel@vger.kernel.org, Kairui Song

From: Kairui Song <kasong@tencent.com>

To prepare for the removal of swap cache bypass swapin, introduce a new
helper that accepts an allocated and charged fresh folio, prepares the
folio and the swap map, and then adds the folio to the swap cache.

This doesn't change how the swap cache works yet; we still depend on
SWAP_HAS_CACHE in the swap map for synchronization. But all of the
synchronization hacks are now contained in this single helper.

No feature change.

Acked-by: Chris Li
Reviewed-by: Barry Song
Signed-off-by: Kairui Song <kasong@tencent.com>
---
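Note for reviewers (not part of the commit message): a rough sketch of
how a swapin path is expected to consume swap_cache_alloc_folio() after
this change. swapin_entry_sketch() is hypothetical, the argument order
past gfp_mask and the swap_read_folio() call are assumptions about the
surrounding mm code; the return-value contract is taken from the
kernel-doc added below.

/* Hypothetical caller, for illustration only; not part of this patch. */
static struct folio *swapin_entry_sketch(swp_entry_t entry, gfp_t gfp,
					 struct mempolicy *mpol, pgoff_t ilx)
{
	bool new_page_allocated;
	struct folio *folio;

	/*
	 * swap_cache_alloc_folio() returns a freshly added locked folio
	 * (the caller must initiate the read), the existing cached folio,
	 * or NULL if the slot is unused or we raced with swapin/swapoff.
	 */
	folio = swap_cache_alloc_folio(entry, gfp, mpol, ilx,
				       &new_page_allocated, false);
	if (folio && new_page_allocated)
		swap_read_folio(folio, NULL);	/* assumed read entry point */
	return folio;
}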
 mm/swap_state.c | 197 +++++++++++++++++++++++++++++++--------------------------
 1 file changed, 109 insertions(+), 88 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 08252eaef32f..a8511ce43242 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -402,6 +402,97 @@ void swap_update_readahead(struct folio *folio, struct vm_area_struct *vma,
 	}
 }
 
+/**
+ * __swap_cache_prepare_and_add - Prepare the folio and add it to swap cache.
+ * @entry: swap entry to be bound to the folio.
+ * @folio: folio to be added.
+ * @gfp: memory allocation flags for charge, can be 0 if @charged is true.
+ * @charged: if the folio is already charged.
+ * @skip_if_exists: if the slot is in a cached state, return NULL.
+ *                  This is an old workaround that will be removed shortly.
+ *
+ * Update the swap_map and add folio as swap cache, typically before swapin.
+ * All swap slots covered by the folio must have a non-zero swap count.
+ *
+ * Context: Caller must protect the swap device with reference count or locks.
+ * Return: Returns the folio being added on success. Returns the existing folio
+ * if @entry is already cached. Returns NULL if raced with swapin or swapoff.
+ */
+static struct folio *__swap_cache_prepare_and_add(swp_entry_t entry,
+						  struct folio *folio,
+						  gfp_t gfp, bool charged,
+						  bool skip_if_exists)
+{
+	struct folio *swapcache;
+	void *shadow;
+	int ret;
+
+	/*
+	 * Check and pin the swap map with SWAP_HAS_CACHE, then add the folio
+	 * into the swap cache. Loop with a schedule delay if raced with
+	 * another process setting SWAP_HAS_CACHE. This hackish loop will
+	 * be fixed very soon.
+	 */
+	for (;;) {
+		ret = swapcache_prepare(entry, folio_nr_pages(folio));
+		if (!ret)
+			break;
+
+		/*
+		 * The skip_if_exists is for protecting against a recursive
+		 * call to this helper on the same entry waiting forever
+		 * here because SWAP_HAS_CACHE is set but the folio is not
+		 * in the swap cache yet. This can happen today if
+		 * mem_cgroup_swapin_charge_folio() below triggers reclaim
+		 * through zswap, which may call this helper again in the
+		 * writeback path.
+		 *
+		 * Large order allocation also needs special handling on
+		 * race: if a smaller folio exists in cache, swapin needs
+		 * to fall back to order 0, and doing a swap cache lookup
+		 * might return a folio that is irrelevant to the faulting
+		 * entry because @entry is aligned down. Just return NULL.
+		 */
+		if (ret != -EEXIST || skip_if_exists || folio_test_large(folio))
+			return NULL;
+
+		/*
+		 * Check the swap cache again; we can only arrive
+		 * here because swapcache_prepare returned -EEXIST.
+		 */
+		swapcache = swap_cache_get_folio(entry);
+		if (swapcache)
+			return swapcache;
+
+		/*
+		 * We might race against __swap_cache_del_folio(), and
+		 * stumble across a swap_map entry whose SWAP_HAS_CACHE
+		 * has not yet been cleared. Or race against another
+		 * swap_cache_alloc_folio(), which has set SWAP_HAS_CACHE
+		 * in swap_map, but not yet added its folio to swap cache.
+		 */
+		schedule_timeout_uninterruptible(1);
+	}
+
+	__folio_set_locked(folio);
+	__folio_set_swapbacked(folio);
+
+	if (!charged && mem_cgroup_swapin_charge_folio(folio, NULL, gfp, entry)) {
+		put_swap_folio(folio, entry);
+		folio_unlock(folio);
+		return NULL;
+	}
+
+	swap_cache_add_folio(folio, entry, &shadow);
+	memcg1_swapin(entry, folio_nr_pages(folio));
+	if (shadow)
+		workingset_refault(folio, shadow);
+
+	/* Caller will initiate read into locked folio */
+	folio_add_lru(folio);
+	return folio;
+}
+
 /**
  * swap_cache_alloc_folio - Allocate folio for swapped out slot in swap cache.
  * @entry: the swapped out swap entry to be binded to the folio.
@@ -428,99 +519,29 @@ struct folio *swap_cache_alloc_folio(swp_entry_t entry, gfp_t gfp_mask,
 {
 	struct swap_info_struct *si = __swap_entry_to_info(entry);
 	struct folio *folio;
-	struct folio *new_folio = NULL;
 	struct folio *result = NULL;
-	void *shadow = NULL;
 
 	*new_page_allocated = false;
-	for (;;) {
-		int err;
-
-		/*
-		 * Check the swap cache first, if a cached folio is found,
-		 * return it unlocked. The caller will lock and check it.
-		 */
-		folio = swap_cache_get_folio(entry);
-		if (folio)
-			goto got_folio;
-
-		/*
-		 * Just skip read ahead for unused swap slot.
-		 */
-		if (!swap_entry_swapped(si, entry))
-			goto put_and_return;
-
-		/*
-		 * Get a new folio to read into from swap. Allocate it now if
-		 * new_folio not exist, before marking swap_map SWAP_HAS_CACHE,
-		 * when -EEXIST will cause any racers to loop around until we
-		 * add it to cache.
-		 */
-		if (!new_folio) {
-			new_folio = folio_alloc_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
-			if (!new_folio)
-				goto put_and_return;
-		}
-
-		/*
-		 * Swap entry may have been freed since our caller observed it.
-		 */
-		err = swapcache_prepare(entry, 1);
-		if (!err)
-			break;
-		else if (err != -EEXIST)
-			goto put_and_return;
-
-		/*
-		 * Protect against a recursive call to swap_cache_alloc_folio()
-		 * on the same entry waiting forever here because SWAP_HAS_CACHE
-		 * is set but the folio is not the swap cache yet. This can
-		 * happen today if mem_cgroup_swapin_charge_folio() below
-		 * triggers reclaim through zswap, which may call
-		 * swap_cache_alloc_folio() in the writeback path.
-		 */
-		if (skip_if_exists)
-			goto put_and_return;
+	/* Check the swap cache again for readahead path. */
+	folio = swap_cache_get_folio(entry);
+	if (folio)
+		return folio;
 
-		/*
-		 * We might race against __swap_cache_del_folio(), and
-		 * stumble across a swap_map entry whose SWAP_HAS_CACHE
-		 * has not yet been cleared. Or race against another
-		 * swap_cache_alloc_folio(), which has set SWAP_HAS_CACHE
-		 * in swap_map, but not yet added its folio to swap cache.
-		 */
-		schedule_timeout_uninterruptible(1);
-	}
-
-	/*
-	 * The swap entry is ours to swap in. Prepare the new folio.
-	 */
-	__folio_set_locked(new_folio);
-	__folio_set_swapbacked(new_folio);
-
-	if (mem_cgroup_swapin_charge_folio(new_folio, NULL, gfp_mask, entry))
-		goto fail_unlock;
-
-	swap_cache_add_folio(new_folio, entry, &shadow);
-	memcg1_swapin(entry, 1);
+	/* Skip allocation for unused swap slot for readahead path. */
+	if (!swap_entry_swapped(si, entry))
+		return NULL;
 
-	if (shadow)
-		workingset_refault(new_folio, shadow);
-
-	/* Caller will initiate read into locked new_folio */
-	folio_add_lru(new_folio);
-	*new_page_allocated = true;
-	folio = new_folio;
-got_folio:
-	result = folio;
-	goto put_and_return;
-
-fail_unlock:
-	put_swap_folio(new_folio, entry);
-	folio_unlock(new_folio);
-put_and_return:
-	if (!(*new_page_allocated) && new_folio)
-		folio_put(new_folio);
+	/* Allocate a new folio to be added into the swap cache. */
+	folio = folio_alloc_mpol(gfp_mask, 0, mpol, ilx, numa_node_id());
+	if (!folio)
+		return NULL;
+	/* Try to add the new folio; returns existing folio or NULL on failure. */
+	result = __swap_cache_prepare_and_add(entry, folio, gfp_mask,
+					      false, skip_if_exists);
+	if (result == folio)
+		*new_page_allocated = true;
+	else
+		folio_put(folio);
 	return result;
 }
 
-- 
2.52.0