From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:29 +0800
Subject: [PATCH 06/12] mm, swap: implement helpers for reserving data in the swap table
Message-Id: <20260126-swap-table-p3-v1-6-a74155fab9b0@tencent.com>
In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song, Johannes Weiner, David Hildenbrand, Lorenzo Stoakes, linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

To prepare for using the swap table as the unified swap layer, introduce
macros and helpers for storing multiple kinds of data in a swap table
entry. From now on, we store the PFN in the swap table to make space for
extra counting bits (SWAP_COUNT). Shadows are still stored as they are,
since SWAP_COUNT is not used yet.
Also, rename shadow_swp_to_tb to shadow_to_swp_tb; that's a spelling
error, not really worth a separate fix.

No behaviour change yet, just preparing the API.

Signed-off-by: Kairui Song
---
 mm/swap_state.c |   6 +--
 mm/swap_table.h | 124 +++++++++++++++++++++++++++++++++++++++++++++++++++----
 2 files changed, 117 insertions(+), 13 deletions(-)

diff --git a/mm/swap_state.c b/mm/swap_state.c
index 6d0eef7470be..e213ee35c1d2 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -148,7 +148,7 @@ void __swap_cache_add_folio(struct swap_cluster_info *ci,
 	VM_WARN_ON_ONCE_FOLIO(folio_test_swapcache(folio), folio);
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_swapbacked(folio), folio);
 
-	new_tb = folio_to_swp_tb(folio);
+	new_tb = folio_to_swp_tb(folio, 0);
 	ci_start = swp_cluster_offset(entry);
 	ci_off = ci_start;
 	ci_end = ci_start + nr_pages;
@@ -249,7 +249,7 @@ void __swap_cache_del_folio(struct swap_cluster_info *ci, struct folio *folio,
 	VM_WARN_ON_ONCE_FOLIO(folio_test_writeback(folio), folio);
 
 	si = __swap_entry_to_info(entry);
-	new_tb = shadow_swp_to_tb(shadow);
+	new_tb = shadow_to_swp_tb(shadow, 0);
 	ci_start = swp_cluster_offset(entry);
 	ci_end = ci_start + nr_pages;
 	ci_off = ci_start;
@@ -331,7 +331,7 @@ void __swap_cache_replace_folio(struct swap_cluster_info *ci,
 	VM_WARN_ON_ONCE(!entry.val);
 
 	/* Swap cache still stores N entries instead of a high-order entry */
-	new_tb = folio_to_swp_tb(new);
+	new_tb = folio_to_swp_tb(new, 0);
 	do {
 		old_tb = __swap_table_xchg(ci, ci_off, new_tb);
 		WARN_ON_ONCE(!swp_tb_is_folio(old_tb) || swp_tb_to_folio(old_tb) != old);
diff --git a/mm/swap_table.h b/mm/swap_table.h
index 10e11d1f3b04..9c4083e4e4f2 100644
--- a/mm/swap_table.h
+++ b/mm/swap_table.h
@@ -12,17 +12,72 @@ struct swap_table {
 };
 
 #define SWP_TABLE_USE_PAGE (sizeof(struct swap_table) == PAGE_SIZE)
-#define SWP_TB_COUNT_BITS 4
 
 /*
  * A swap table entry represents the status of a swap slot on a swap
  * (physical or virtual) device. The swap table in each cluster is a
  * 1:1 map of the swap slots in this cluster.
  *
- * Each swap table entry could be a pointer (folio), a XA_VALUE
- * (shadow), or NULL.
+ * Swap table entry type and bits layouts:
+ *
+ * NULL:    |---------------- 0 ---------------| - Free slot
+ * Shadow:  | SWAP_COUNT |---- SHADOW_VAL ---|1| - Swapped out slot
+ * PFN:     | SWAP_COUNT |------ PFN -------|10| - Cached slot
+ * Pointer: |----------- Pointer ----------|100| - (Unused)
+ * Bad:     |------------- 1 -------------|1000| - Bad slot
+ *
+ * SWAP_COUNT is `SWP_TB_COUNT_BITS` long, each entry is an atomic long.
+ *
+ * Usages:
+ *
+ * - NULL: Swap slot is unused, could be allocated.
+ *
+ * - Shadow: Swap slot is used and not cached (usually swapped out). It reuses
+ *   the XA_VALUE format to be compatible with working set shadows. SHADOW_VAL
+ *   part might be all 0 if the working shadow info is absent. In such a case,
+ *   we still want to keep the shadow format as a placeholder.
+ *
+ *   Memcg ID is embedded in SHADOW_VAL.
+ *
+ * - PFN: Swap slot is in use, and cached. Memcg info is recorded on the page
+ *   struct.
+ *
+ * - Pointer: Unused yet. `0b100` is reserved for potential pointer usage
+ *   because only the lower three bits can be used as a marker for 8 bytes
+ *   aligned pointers.
+ *
+ * - Bad: Swap slot is reserved, protects swap header or holes on swap devices.
  */
 
+/* Common SWAP_COUNT part */
+#define SWP_TB_COUNT_BITS	4 /* This can be shrunk or extended if needed */
+#define SWP_TB_COUNT_MASK	(~((~0UL) >> SWP_TB_COUNT_BITS))
+#define SWP_TB_COUNT_SHIFT	(BITS_PER_LONG - SWP_TB_COUNT_BITS)
+#define SWP_TB_COUNT_MAX	((1 << SWP_TB_COUNT_BITS) - 2)
+
+/* NULL Entry, all 0 */
+#define SWP_TB_NULL		0UL
+
+/* Swapped out: Shadow */
+#define SWP_TB_SHADOW_MARK	0b1UL
+
+/* Cached: PFN */
+#define SWP_TB_PFN_MASK		((~0UL) >> SWP_TB_COUNT_BITS)
+#define SWP_TB_PFN_MARK		0b10UL
+#define SWP_TB_PFN_MARK_BITS	2
+#define SWP_TB_PFN_MARK_MASK	(BIT(SWP_TB_PFN_MARK_BITS) - 1)
+
+/* Bad slot, ends with 0b1000 and rests of bits are all 1 */
+#define SWP_TB_BAD		((~0UL) << 3)
+
+#if defined(MAX_POSSIBLE_PHYSMEM_BITS)
+#define SWAP_CACHE_PFN_BITS	(MAX_POSSIBLE_PHYSMEM_BITS - PAGE_SHIFT)
+#elif defined(MAX_PHYSMEM_BITS)
+#define SWAP_CACHE_PFN_BITS	(MAX_PHYSMEM_BITS - PAGE_SHIFT)
+#else
+#define SWAP_CACHE_PFN_BITS	(BITS_PER_LONG - PAGE_SHIFT)
+#endif
+
 /* Macro for shadow offset calculation */
 #define SWAP_COUNT_SHIFT	SWP_TB_COUNT_BITS
 
@@ -35,18 +90,41 @@ static inline unsigned long null_to_swp_tb(void)
 	return 0;
 }
 
-static inline unsigned long folio_to_swp_tb(struct folio *folio)
+static inline unsigned long __count_to_swp_tb(unsigned char count)
 {
+	VM_WARN_ON(count > SWP_TB_COUNT_MAX);
+	return ((unsigned long)count) << SWP_TB_COUNT_SHIFT;
+}
+
+static inline unsigned long pfn_to_swp_tb(unsigned long pfn, unsigned int count)
+{
+	unsigned long swp_tb;
+
 	BUILD_BUG_ON(sizeof(unsigned long) != sizeof(void *));
-	return (unsigned long)folio;
+	BUILD_BUG_ON(SWAP_CACHE_PFN_BITS >
+		     (BITS_PER_LONG - SWP_TB_PFN_MARK_BITS - SWP_TB_COUNT_BITS));
+
+	swp_tb = (pfn << SWP_TB_PFN_MARK_BITS) | SWP_TB_PFN_MARK;
+	VM_WARN_ON_ONCE(swp_tb & SWP_TB_COUNT_MASK);
+
+	return swp_tb | __count_to_swp_tb(count);
 }
 
-static inline unsigned long shadow_swp_to_tb(void *shadow)
+static inline unsigned long folio_to_swp_tb(struct folio *folio, unsigned int count)
+{
+	return pfn_to_swp_tb(folio_pfn(folio), count);
+}
+
+static inline unsigned long shadow_to_swp_tb(void *shadow, unsigned int count)
 {
 	BUILD_BUG_ON((BITS_PER_XA_VALUE + 1) !=
		     BITS_PER_BYTE * sizeof(unsigned long));
+	BUILD_BUG_ON((unsigned long)xa_mk_value(0) != SWP_TB_SHADOW_MARK);
+
 	VM_WARN_ON_ONCE(shadow && !xa_is_value(shadow));
-	return (unsigned long)shadow;
+	VM_WARN_ON_ONCE(shadow && ((unsigned long)shadow & SWP_TB_COUNT_MASK));
+
+	return (unsigned long)shadow | __count_to_swp_tb(count) | SWP_TB_SHADOW_MARK;
 }
 
 /*
@@ -59,7 +137,7 @@ static inline bool swp_tb_is_null(unsigned long swp_tb)
 
 static inline bool swp_tb_is_folio(unsigned long swp_tb)
 {
-	return !xa_is_value((void *)swp_tb) && !swp_tb_is_null(swp_tb);
+	return ((swp_tb & SWP_TB_PFN_MARK_MASK) == SWP_TB_PFN_MARK);
 }
 
 static inline bool swp_tb_is_shadow(unsigned long swp_tb)
@@ -67,19 +145,43 @@ static inline bool swp_tb_is_shadow(unsigned long swp_tb)
 	return xa_is_value((void *)swp_tb);
 }
 
+static inline bool swp_tb_is_bad(unsigned long swp_tb)
+{
+	return swp_tb == SWP_TB_BAD;
+}
+
+static inline bool swp_tb_is_countable(unsigned long swp_tb)
+{
+	return (swp_tb_is_shadow(swp_tb) || swp_tb_is_folio(swp_tb) ||
+		swp_tb_is_null(swp_tb));
+}
+
 /*
  * Helpers for retrieving info from swap table.
  */
 static inline struct folio *swp_tb_to_folio(unsigned long swp_tb)
 {
 	VM_WARN_ON(!swp_tb_is_folio(swp_tb));
-	return (void *)swp_tb;
+	return pfn_folio((swp_tb & SWP_TB_PFN_MASK) >> SWP_TB_PFN_MARK_BITS);
 }
 
 static inline void *swp_tb_to_shadow(unsigned long swp_tb)
 {
 	VM_WARN_ON(!swp_tb_is_shadow(swp_tb));
-	return (void *)swp_tb;
+	return (void *)(swp_tb & ~SWP_TB_COUNT_MASK);
+}
+
+static inline unsigned char __swp_tb_get_count(unsigned long swp_tb)
+{
+	VM_WARN_ON(!swp_tb_is_countable(swp_tb));
+	return ((swp_tb & SWP_TB_COUNT_MASK) >> SWP_TB_COUNT_SHIFT);
+}
+
+static inline int swp_tb_get_count(unsigned long swp_tb)
+{
+	if (swp_tb_is_countable(swp_tb))
+		return __swp_tb_get_count(swp_tb);
+	return -EINVAL;
 }
 
 /*
@@ -124,6 +226,8 @@ static inline unsigned long swap_table_get(struct swap_cluster_info *ci,
 	atomic_long_t *table;
 	unsigned long swp_tb;
 
+	VM_WARN_ON_ONCE(off >= SWAPFILE_CLUSTER);
+
 	rcu_read_lock();
 	table = rcu_dereference(ci->table);
 	swp_tb = table ? atomic_long_read(&table[off]) : null_to_swp_tb();
-- 
2.52.0