From: Joshua Hahn
To: Minchan Kim, Sergey Senozhatsky
Cc: Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Andrew Morton, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: [PATCH 5/8] mm/zsmalloc,zswap: Redirect zswap_entry->objcg to zpdesc
Date: Thu, 26 Feb 2026 11:29:28 -0800
Message-ID: <20260226192936.3190275-6-joshua.hahnjy@gmail.com>
In-Reply-To: <20260226192936.3190275-1-joshua.hahnjy@gmail.com>
References: <20260226192936.3190275-1-joshua.hahnjy@gmail.com>

Now that obj_cgroups are tracked in the zpdesc, redirect the zswap layer
to use the pointer stored in the zpdesc and remove the duplicate pointer
from struct zswap_entry. This offsets the temporary memory increase
caused by storing the obj_cgroup pointer in both places and results in a
net-zero change in memory footprint. The lifetime and charging of the
obj_cgroup are still handled in the zswap layer.

Also clean up mem_cgroup_from_entry(), which no longer has any callers.
Suggested-by: Johannes Weiner
Signed-off-by: Joshua Hahn
---
 include/linux/zsmalloc.h |  1 +
 mm/zsmalloc.c            | 29 +++++++++++++++++++++++
 mm/zswap.c               | 51 ++++++++++++++++++----------------------
 3 files changed, 53 insertions(+), 28 deletions(-)

diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index 22f3baa13f24..05b2b163a427 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -38,6 +38,7 @@ unsigned long zs_get_total_pages(struct zs_pool *pool);
 unsigned long zs_compact(struct zs_pool *pool);
 
 unsigned int zs_lookup_class_index(struct zs_pool *pool, unsigned int size);
+struct obj_cgroup *zs_lookup_objcg(struct zs_pool *pool, unsigned long handle);
 
 void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats);
 
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index e5ae9a0fc78a..067215a6ddcc 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -977,6 +977,30 @@ static void migrate_obj_objcg(unsigned long used_obj, unsigned long free_obj,
 	zpdesc_set_obj_cgroup(d_zpdesc, d_obj_idx, size, objcg);
 	zpdesc_set_obj_cgroup(s_zpdesc, s_obj_idx, size, NULL);
 }
+
+struct obj_cgroup *zs_lookup_objcg(struct zs_pool *pool, unsigned long handle)
+{
+	unsigned long obj;
+	struct zpdesc *zpdesc;
+	struct zspage *zspage;
+	struct size_class *class;
+	struct obj_cgroup *objcg;
+	unsigned int obj_idx;
+
+	read_lock(&pool->lock);
+	obj = handle_to_obj(handle);
+	obj_to_location(obj, &zpdesc, &obj_idx);
+
+	zspage = get_zspage(zpdesc);
+	zspage_read_lock(zspage);
+	read_unlock(&pool->lock);
+
+	class = zspage_class(pool, zspage);
+	objcg = zpdesc_obj_cgroup(zpdesc, obj_idx, class->size);
+	zspage_read_unlock(zspage);
+
+	return objcg;
+}
 #else
 static inline struct obj_cgroup *zpdesc_obj_cgroup(struct zpdesc *zpdesc,
 						   unsigned int offset,
@@ -996,6 +1020,11 @@ static bool alloc_zspage_objcgs(struct size_class *class, gfp_t gfp,
 
 static void migrate_obj_objcg(unsigned long used_obj, unsigned long free_obj,
 			      int size) {}
+
+struct obj_cgroup *zs_lookup_objcg(struct zs_pool *pool, unsigned long handle)
+{
+	return NULL;
+}
 #endif
 
 static void create_page_chain(struct size_class *class, struct zspage *zspage,
diff --git a/mm/zswap.c b/mm/zswap.c
index 1e2d60f47919..55161a5c9d4c 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -193,7 +193,6 @@ struct zswap_entry {
 	bool referenced;
 	struct zswap_pool *pool;
 	unsigned long handle;
-	struct obj_cgroup *objcg;
 	struct list_head lru;
 };
 
@@ -601,25 +600,13 @@ static int zswap_enabled_param_set(const char *val,
 * lru functions
 **********************************/
 
-/* should be called under RCU */
-#ifdef CONFIG_MEMCG
-static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
-{
-	return entry->objcg ? obj_cgroup_memcg(entry->objcg) : NULL;
-}
-#else
-static inline struct mem_cgroup *mem_cgroup_from_entry(struct zswap_entry *entry)
-{
-	return NULL;
-}
-#endif
-
 static inline int entry_to_nid(struct zswap_entry *entry)
 {
 	return page_to_nid(virt_to_page(entry));
 }
 
-static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
+static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry,
+			  struct obj_cgroup *objcg)
 {
 	int nid = entry_to_nid(entry);
 	struct mem_cgroup *memcg;
@@ -636,19 +623,20 @@ static void zswap_lru_add(struct list_lru *list_lru, struct zswap_entry *entry)
 	 * Similar reasoning holds for list_lru_del().
 	 */
 	rcu_read_lock();
-	memcg = mem_cgroup_from_entry(entry);
+	memcg = objcg ? obj_cgroup_memcg(objcg) : NULL;
 	/* will always succeed */
 	list_lru_add(list_lru, &entry->lru, nid, memcg);
 	rcu_read_unlock();
 }
 
-static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry)
+static void zswap_lru_del(struct list_lru *list_lru, struct zswap_entry *entry,
+			  struct obj_cgroup *objcg)
 {
 	int nid = entry_to_nid(entry);
 	struct mem_cgroup *memcg;
 
 	rcu_read_lock();
-	memcg = mem_cgroup_from_entry(entry);
+	memcg = objcg ? obj_cgroup_memcg(objcg) : NULL;
 	list_lru_del(list_lru, &entry->lru, nid, memcg);
 	rcu_read_unlock();
@@ -716,12 +704,16 @@ static void zswap_entry_cache_free(struct zswap_entry *entry)
  */
 static void zswap_entry_free(struct zswap_entry *entry)
 {
-	zswap_lru_del(&zswap_list_lru, entry);
+	struct obj_cgroup *objcg = zs_lookup_objcg(entry->pool->zs_pool,
+						   entry->handle);
+
+	zswap_lru_del(&zswap_list_lru, entry, objcg);
 	zs_free(entry->pool->zs_pool, entry->handle);
 	zswap_pool_put(entry->pool);
-	if (entry->objcg) {
-		obj_cgroup_uncharge_zswap(entry->objcg, entry->length);
-		obj_cgroup_put(entry->objcg);
+
+	if (objcg) {
+		obj_cgroup_uncharge_zswap(objcg, entry->length);
+		obj_cgroup_put(objcg);
 	}
 	if (entry->length == PAGE_SIZE)
 		atomic_long_dec(&zswap_stored_incompressible_pages);
@@ -994,6 +986,7 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	struct mempolicy *mpol;
 	bool folio_was_allocated;
 	struct swap_info_struct *si;
+	struct obj_cgroup *objcg;
 	int ret = 0;
 
 	/* try to allocate swap cache folio */
@@ -1043,8 +1036,9 @@ static int zswap_writeback_entry(struct zswap_entry *entry,
 	xa_erase(tree, offset);
 
 	count_vm_event(ZSWPWB);
-	if (entry->objcg)
-		count_objcg_events(entry->objcg, ZSWPWB, 1);
+	objcg = zs_lookup_objcg(entry->pool->zs_pool, entry->handle);
+	if (objcg)
+		count_objcg_events(objcg, ZSWPWB, 1);
 
 	zswap_entry_free(entry);
 
@@ -1463,11 +1457,10 @@ static bool zswap_store_page(struct page *page,
 	 */
 	entry->pool = pool;
 	entry->swpentry = page_swpentry;
-	entry->objcg = objcg;
 	entry->referenced = true;
 	if (entry->length) {
 		INIT_LIST_HEAD(&entry->lru);
-		zswap_lru_add(&zswap_list_lru, entry);
+		zswap_lru_add(&zswap_list_lru, entry, objcg);
 	}
 
 	return true;
@@ -1592,6 +1585,7 @@ int zswap_load(struct folio *folio)
 	bool swapcache = folio_test_swapcache(folio);
 	struct xarray *tree = swap_zswap_tree(swp);
 	struct zswap_entry *entry;
+	struct obj_cgroup *objcg;
 
 	VM_WARN_ON_ONCE(!folio_test_locked(folio));
 
@@ -1620,8 +1614,9 @@ int zswap_load(struct folio *folio)
 	folio_mark_uptodate(folio);
 
 	count_vm_event(ZSWPIN);
-	if (entry->objcg)
-		count_objcg_events(entry->objcg, ZSWPIN, 1);
+	objcg = zs_lookup_objcg(entry->pool->zs_pool, entry->handle);
+	if (objcg)
+		count_objcg_events(objcg, ZSWPIN, 1);
 
 	/*
 	 * When reading into the swapcache, invalidate our entry. The
-- 
2.47.3