From nobody Mon Feb 9 09:09:16 2026
From: Kairui Song
Date: Mon, 26 Jan 2026 01:57:28 +0800
Subject: [PATCH 05/12] mm/workingset: leave highest bits empty for anon shadow
Message-Id: <20260126-swap-table-p3-v1-5-a74155fab9b0@tencent.com>
References: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
In-Reply-To: <20260126-swap-table-p3-v1-0-a74155fab9b0@tencent.com>
To: linux-mm@kvack.org
Cc: Andrew Morton, Kemeng Shi, Nhat Pham, Baoquan He, Barry Song,
    Johannes Weiner, David Hildenbrand, Lorenzo Stoakes,
    linux-kernel@vger.kernel.org, Chris Li, Kairui Song
X-Mailer: b4 0.14.3

From: Kairui Song

The swap table entry will need 4 bits reserved for the swap count in
the shadow, so the anon shadow must keep its leading 4 bits zero.

This should be fine for the foreseeable future. Take 52 bits of
physical address space as an example: with 4K pages, there are at most
40 bits of addressable pages. Currently 36 bits are available for the
eviction timestamp (64 - 1 - 16 - 10 - 1, where XA_VALUE takes 1 bit
for the marker, MEM_CGROUP_ID_SHIFT takes 16 bits, NODES_SHIFT takes
up to 10 bits, and the WORKINGSET flag takes 1 bit). So in the worst
case we previously had to pack 40 bits of address into a 36-bit field
using a 64K bucket (bucket_order = 4). After this change the anon
bucket grows to 1M, which should be fine: on machines that large, the
working set is far bigger than the bucket size.

For MGLRU's gen number tracking this is more than enough: the gen
number (max_seq) increments much more slowly than the eviction counter
(nonresident_age). In any case, both the refault distance and the gen
distance are only hints that can tolerate some inaccuracy. The 4 bits
can be shrunk to 3, or extended to a larger value, if needed later.
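Below is a minimal stand-alone sketch (illustration only, not part of
the patch) that recomputes the bit budget described above. It assumes
a 64-bit build with MEM_CGROUP_ID_SHIFT = 16 and NODES_SHIFT = 10, the
worst case named in the changelog; the macro values mirror the kernel
definitions, but the program itself is just a back-of-the-envelope
check.

	/*
	 * Illustrative only -- not kernel code. Recomputes the
	 * timestamp bit budget and bucket sizes from the changelog,
	 * assuming MEM_CGROUP_ID_SHIFT = 16 and NODES_SHIFT = 10.
	 */
	#include <stdio.h>

	#define BITS_PER_LONG		64
	#define BITS_PER_XA_VALUE	(BITS_PER_LONG - 1)	/* 1 bit for the XA_VALUE marker */
	#define MEM_CGROUP_ID_SHIFT	16
	#define NODES_SHIFT		10
	#define WORKINGSET_SHIFT	1
	#define SWP_TB_COUNT_BITS	4	/* reserved for the swap count */

	#define EVICTION_SHIFT		((BITS_PER_LONG - BITS_PER_XA_VALUE) + \
					 WORKINGSET_SHIFT + NODES_SHIFT + \
					 MEM_CGROUP_ID_SHIFT)
	#define EVICTION_SHIFT_ANON	(EVICTION_SHIFT + SWP_TB_COUNT_BITS)

	int main(void)
	{
		unsigned int file_bits = BITS_PER_LONG - EVICTION_SHIFT;      /* 36 */
		unsigned int anon_bits = BITS_PER_LONG - EVICTION_SHIFT_ANON; /* 32 */
		unsigned int max_order = 40;	/* 52-bit physical space, 4K pages */

		/* bucket_order absorbs whatever the timestamp field cannot hold */
		printf("file: %u timestamp bits, bucket_order %u -> %luK bucket\n",
		       file_bits, max_order - file_bits,
		       (1UL << (max_order - file_bits)) * 4);
		printf("anon: %u timestamp bits, bucket_order %u -> %luK bucket\n",
		       anon_bits, max_order - anon_bits,
		       (1UL << (max_order - anon_bits)) * 4);
		return 0;
	}

On such a machine this prints bucket_order 4 (64K bucket) for file and
bucket_order 8 (1024K, i.e. 1M, bucket) for anon, matching the numbers
above.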
Signed-off-by: Kairui Song
---
 mm/swap_table.h |  4 ++++
 mm/workingset.c | 49 ++++++++++++++++++++++++++++++-------------------
 2 files changed, 34 insertions(+), 19 deletions(-)

diff --git a/mm/swap_table.h b/mm/swap_table.h
index ea244a57a5b7..10e11d1f3b04 100644
--- a/mm/swap_table.h
+++ b/mm/swap_table.h
@@ -12,6 +12,7 @@ struct swap_table {
 };
 
 #define SWP_TABLE_USE_PAGE (sizeof(struct swap_table) == PAGE_SIZE)
+#define SWP_TB_COUNT_BITS 4
 
 /*
  * A swap table entry represents the status of a swap slot on a swap
@@ -22,6 +23,9 @@ struct swap_table {
  * (shadow), or NULL.
  */
 
+/* Macro for shadow offset calculation */
+#define SWAP_COUNT_SHIFT SWP_TB_COUNT_BITS
+
 /*
  * Helpers for casting one type of info into a swap table entry.
  */
diff --git a/mm/workingset.c b/mm/workingset.c
index 13422d304715..37a94979900f 100644
--- a/mm/workingset.c
+++ b/mm/workingset.c
@@ -16,6 +16,7 @@
 #include
 #include
 #include
+#include "swap_table.h"
 #include "internal.h"
 
 /*
@@ -184,7 +185,9 @@
 #define EVICTION_SHIFT	((BITS_PER_LONG - BITS_PER_XA_VALUE) +	\
 			 WORKINGSET_SHIFT + NODES_SHIFT + \
 			 MEM_CGROUP_ID_SHIFT)
+#define EVICTION_SHIFT_ANON	(EVICTION_SHIFT + SWAP_COUNT_SHIFT)
 #define EVICTION_MASK	(~0UL >> EVICTION_SHIFT)
+#define EVICTION_MASK_ANON	(~0UL >> EVICTION_SHIFT_ANON)
 
 /*
  * Eviction timestamps need to be able to cover the full range of
@@ -194,12 +197,12 @@
  * that case, we have to sacrifice granularity for distance, and group
  * evictions into coarser buckets by shaving off lower timestamp bits.
  */
-static unsigned int bucket_order __read_mostly;
+static unsigned int bucket_order[ANON_AND_FILE] __read_mostly;
 
 static void *pack_shadow(int memcgid, pg_data_t *pgdat, unsigned long eviction,
-			 bool workingset)
+			 bool workingset, bool file)
 {
-	eviction &= EVICTION_MASK;
+	eviction &= file ? EVICTION_MASK : EVICTION_MASK_ANON;
 	eviction = (eviction << MEM_CGROUP_ID_SHIFT) | memcgid;
 	eviction = (eviction << NODES_SHIFT) | pgdat->node_id;
 	eviction = (eviction << WORKINGSET_SHIFT) | workingset;
@@ -244,7 +247,8 @@ static void *lru_gen_eviction(struct folio *folio)
 	struct mem_cgroup *memcg = folio_memcg(folio);
 	struct pglist_data *pgdat = folio_pgdat(folio);
 
-	BUILD_BUG_ON(LRU_GEN_WIDTH + LRU_REFS_WIDTH > BITS_PER_LONG - EVICTION_SHIFT);
+	BUILD_BUG_ON(LRU_GEN_WIDTH + LRU_REFS_WIDTH >
+		     BITS_PER_LONG - max(EVICTION_SHIFT, EVICTION_SHIFT_ANON));
 
 	lruvec = mem_cgroup_lruvec(memcg, pgdat);
 	lrugen = &lruvec->lrugen;
@@ -254,7 +258,7 @@ static void *lru_gen_eviction(struct folio *folio)
 	hist = lru_hist_from_seq(min_seq);
 	atomic_long_add(delta, &lrugen->evicted[hist][type][tier]);
 
-	return pack_shadow(mem_cgroup_private_id(memcg), pgdat, token, workingset);
+	return pack_shadow(mem_cgroup_private_id(memcg), pgdat, token, workingset, type);
 }
 
 /*
@@ -262,7 +266,7 @@ static void *lru_gen_eviction(struct folio *folio)
  * Fills in @lruvec, @token, @workingset with the values unpacked from shadow.
  */
 static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
-				unsigned long *token, bool *workingset)
+				unsigned long *token, bool *workingset, bool file)
 {
 	int memcg_id;
 	unsigned long max_seq;
@@ -275,7 +279,7 @@ static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
 	*lruvec = mem_cgroup_lruvec(memcg, pgdat);
 
 	max_seq = READ_ONCE((*lruvec)->lrugen.max_seq);
-	max_seq &= EVICTION_MASK >> LRU_REFS_WIDTH;
+	max_seq &= (file ? EVICTION_MASK : EVICTION_MASK_ANON) >> LRU_REFS_WIDTH;
 
 	return abs_diff(max_seq, *token >> LRU_REFS_WIDTH) < MAX_NR_GENS;
 }
@@ -293,7 +297,7 @@ static void lru_gen_refault(struct folio *folio, void *shadow)
 
 	rcu_read_lock();
 
-	recent = lru_gen_test_recent(shadow, &lruvec, &token, &workingset);
+	recent = lru_gen_test_recent(shadow, &lruvec, &token, &workingset, type);
 	if (lruvec != folio_lruvec(folio))
 		goto unlock;
 
@@ -331,7 +335,7 @@ static void *lru_gen_eviction(struct folio *folio)
 }
 
 static bool lru_gen_test_recent(void *shadow, struct lruvec **lruvec,
-				unsigned long *token, bool *workingset)
+				unsigned long *token, bool *workingset, bool file)
 {
 	return false;
 }
@@ -381,6 +385,7 @@ void workingset_age_nonresident(struct lruvec *lruvec, unsigned long nr_pages)
 void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
 {
 	struct pglist_data *pgdat = folio_pgdat(folio);
+	int file = folio_is_file_lru(folio);
 	unsigned long eviction;
 	struct lruvec *lruvec;
 	int memcgid;
@@ -397,10 +402,10 @@ void *workingset_eviction(struct folio *folio, struct mem_cgroup *target_memcg)
 	/* XXX: target_memcg can be NULL, go through lruvec */
 	memcgid = mem_cgroup_private_id(lruvec_memcg(lruvec));
 	eviction = atomic_long_read(&lruvec->nonresident_age);
-	eviction >>= bucket_order;
+	eviction >>= bucket_order[file];
 	workingset_age_nonresident(lruvec, folio_nr_pages(folio));
 	return pack_shadow(memcgid, pgdat, eviction,
-			   folio_test_workingset(folio));
+			   folio_test_workingset(folio), file);
 }
 
 /**
@@ -431,14 +436,15 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 		bool recent;
 
 		rcu_read_lock();
-		recent = lru_gen_test_recent(shadow, &eviction_lruvec, &eviction, workingset);
+		recent = lru_gen_test_recent(shadow, &eviction_lruvec, &eviction,
+					     workingset, file);
 		rcu_read_unlock();
 		return recent;
 	}
 
 	rcu_read_lock();
 	unpack_shadow(shadow, &memcgid, &pgdat, &eviction, workingset);
-	eviction <<= bucket_order;
+	eviction <<= bucket_order[file];
 
 	/*
 	 * Look up the memcg associated with the stored ID. It might
@@ -495,7 +501,8 @@ bool workingset_test_recent(void *shadow, bool file, bool *workingset,
 	 * longest time, so the occasional inappropriate activation
 	 * leading to pressure on the active list is not a problem.
 	 */
-	refault_distance = (refault - eviction) & EVICTION_MASK;
+	refault_distance = ((refault - eviction) &
+			    (file ? EVICTION_MASK : EVICTION_MASK_ANON));
 
 	/*
	 * Compare the distance to the existing workingset size. We
@@ -780,8 +787,8 @@ static struct lock_class_key shadow_nodes_key;
 
 static int __init workingset_init(void)
 {
+	unsigned int timestamp_bits, timestamp_bits_anon;
 	struct shrinker *workingset_shadow_shrinker;
-	unsigned int timestamp_bits;
 	unsigned int max_order;
 	int ret = -ENOMEM;
 
@@ -794,11 +801,15 @@ static int __init workingset_init(void)
 	 * double the initial memory by using totalram_pages as-is.
 	 */
 	timestamp_bits = BITS_PER_LONG - EVICTION_SHIFT;
+	timestamp_bits_anon = BITS_PER_LONG - EVICTION_SHIFT_ANON;
 	max_order = fls_long(totalram_pages() - 1);
-	if (max_order > timestamp_bits)
-		bucket_order = max_order - timestamp_bits;
-	pr_info("workingset: timestamp_bits=%d max_order=%d bucket_order=%u\n",
-		timestamp_bits, max_order, bucket_order);
+	if (max_order > (BITS_PER_LONG - EVICTION_SHIFT))
+		bucket_order[WORKINGSET_FILE] = max_order - timestamp_bits;
+	if (max_order > timestamp_bits_anon)
+		bucket_order[WORKINGSET_ANON] = max_order - timestamp_bits_anon;
+	pr_info("workingset: timestamp_bits=%d (anon: %d) max_order=%d bucket_order=%u (anon: %d)\n",
+		timestamp_bits, timestamp_bits_anon, max_order,
+		bucket_order[WORKINGSET_FILE], bucket_order[WORKINGSET_ANON]);
 
 	workingset_shadow_shrinker = shrinker_alloc(SHRINKER_NUMA_AWARE |
 						    SHRINKER_MEMCG_AWARE,
-- 
2.52.0