From: Joshua Hahn
To: Minchan Kim, Sergey Senozhatsky
Cc: Johannes Weiner, Yosry Ahmed, Nhat Pham, Chengming Zhou, Michal Hocko, Roman Gushchin, Shakeel Butt, Muchun Song, Axel Rasmussen, Yuanchu Xie, Wei Xu, David Hildenbrand, Lorenzo Stoakes, "Liam R. Howlett", Vlastimil Babka, Mike Rapoport, Suren Baghdasaryan, Andrew Morton, cgroups@vger.kernel.org, linux-mm@kvack.org, linux-kernel@vger.kernel.org, kernel-team@meta.com
Subject: [PATCH 09/11] mm/vmstat, memcontrol: Track ZSWAP_B, ZSWAPPED_B per-memcg-lruvec
Date: Wed, 11 Mar 2026 12:51:46 -0700
Message-ID: <20260311195153.4013476-10-joshua.hahnjy@gmail.com>
In-Reply-To: <20260311195153.4013476-1-joshua.hahnjy@gmail.com>
References: <20260311195153.4013476-1-joshua.hahnjy@gmail.com>

Now that memcg charging happens in the zsmalloc layer, where we have both
objcg and page information, we can specify which node's memcg lruvec
zswapped memory should be accounted to. Move MEMCG_ZSWAP_B and
MEMCG_ZSWAPPED_B from enum memcg_stat_item to enum node_stat_item, and
rename their prefixes from MEMCG to NR to reflect this move. In addition,
decouple the updates of node stats (vmstat) from memcg-lruvec stats, since
node stats can only track values at PAGE_SIZE granularity.
Tracking zswap statistics at this finer granularity makes the charging
from zsmalloc more complicated: a compressed object can span two zpdescs
that live on different nodes. In that case, the memcg-lruvecs of both
node-memcg combinations are partially charged, and the memcg-lruvec stats
are updated precisely and proportionally to how the object is split across
the two pages.

For node stats, however, only NR_ZSWAP_B can be kept accurate.
NR_ZSWAPPED_B remains a good best-effort value, but cannot proportionally
account for compressed objects split across nodes, because node stats are
tracked at the coarse PAGE_SIZE granularity. For such objects,
NR_ZSWAPPED_B is accounted entirely to the first zpdesc's node. Note that
this is not a new inaccuracy, only one that cannot be fixed as part of
these changes; the small inaccuracy is accepted in place of invasive
changes across the vmstat infrastructure to begin tracking stats at byte
granularity.

Finally, note that objcg migrations across zspages (and their subsequent
migrations across nodes) are handled in the next patch.
Suggested-by: Johannes Weiner
Signed-off-by: Joshua Hahn
---
 include/linux/memcontrol.h |   5 +-
 include/linux/mmzone.h     |   2 +
 include/linux/zsmalloc.h   |   6 +--
 mm/memcontrol.c            |  22 ++++----
 mm/vmstat.c                |   2 +
 mm/zsmalloc.c              | 104 +++++++++++++++++++++++++++----------
 mm/zswap.c                 |   7 ++-
 7 files changed, 102 insertions(+), 46 deletions(-)

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index ce2e598b5963..b03501e0c09b 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -37,8 +37,6 @@ enum memcg_stat_item {
 	MEMCG_PERCPU_B,
 	MEMCG_VMALLOC,
 	MEMCG_KMEM,
-	MEMCG_ZSWAP_B,
-	MEMCG_ZSWAPPED_B,
 	MEMCG_NR_STAT,
 };
 
@@ -927,6 +925,9 @@ struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
 						struct mem_cgroup *oom_domain);
 void mem_cgroup_print_oom_group(struct mem_cgroup *memcg);
 
+void mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
+			    int val);
+
 /* idx can be of type enum memcg_stat_item or node_stat_item */
 void mod_memcg_state(struct mem_cgroup *memcg, enum memcg_stat_item idx,
 		     int val);
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 3e51190a55e4..ae16a90491ac 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -258,6 +258,8 @@ enum node_stat_item {
 #ifdef CONFIG_HUGETLB_PAGE
 	NR_HUGETLB,
 #endif
+	NR_ZSWAP_B,
+	NR_ZSWAPPED_B,
 	NR_BALLOON_PAGES,
 	NR_KERNEL_FILE_PAGES,
 	NR_VM_NODE_STAT_ITEMS
diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index 6010d8dac9ff..fd79916c7740 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -24,11 +24,11 @@ struct zs_pool_stats {
 struct zs_pool;
 struct scatterlist;
 struct obj_cgroup;
-enum memcg_stat_item;
+enum node_stat_item;
 
 struct zs_pool *zs_create_pool(const char *name, bool memcg_aware,
-			       enum memcg_stat_item compressed_stat,
-			       enum memcg_stat_item uncompressed_stat);
+			       enum node_stat_item compressed_stat,
+			       enum node_stat_item uncompressed_stat);
 void zs_destroy_pool(struct zs_pool *pool);
 
 unsigned long zs_malloc(struct zs_pool *pool, size_t size, gfp_t flags,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 1cb02d2febe8..d87bc4beff16 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -333,6 +333,8 @@ static const unsigned int memcg_node_stat_items[] = {
 #ifdef CONFIG_HUGETLB_PAGE
 	NR_HUGETLB,
 #endif
+	NR_ZSWAP_B,
+	NR_ZSWAPPED_B,
 };
 
 static const unsigned int memcg_stat_items[] = {
@@ -341,8 +343,6 @@ static const unsigned int memcg_stat_items[] = {
 	MEMCG_PERCPU_B,
 	MEMCG_VMALLOC,
 	MEMCG_KMEM,
-	MEMCG_ZSWAP_B,
-	MEMCG_ZSWAPPED_B,
 };
 
 #define NR_MEMCG_NODE_STAT_ITEMS ARRAY_SIZE(memcg_node_stat_items)
@@ -737,9 +737,8 @@ unsigned long memcg_page_state_local(struct mem_cgroup *memcg, int idx)
 }
 #endif
 
-static void mod_memcg_lruvec_state(struct lruvec *lruvec,
-				   enum node_stat_item idx,
-				   int val)
+void mod_memcg_lruvec_state(struct lruvec *lruvec, enum node_stat_item idx,
+			    int val)
 {
 	struct mem_cgroup_per_node *pn;
 	struct mem_cgroup *memcg;
@@ -766,6 +765,7 @@ static void mod_memcg_lruvec_state(struct lruvec *lruvec,
 
 	put_cpu();
 }
+EXPORT_SYMBOL(mod_memcg_lruvec_state);
 
 /**
  * mod_lruvec_state - update lruvec memory statistics
@@ -1363,8 +1363,8 @@ static const struct memory_stat memory_stats[] = {
 	{ "vmalloc",			MEMCG_VMALLOC },
 	{ "shmem",			NR_SHMEM },
 #ifdef CONFIG_ZSWAP
-	{ "zswap",			MEMCG_ZSWAP_B },
-	{ "zswapped",			MEMCG_ZSWAPPED_B },
+	{ "zswap",			NR_ZSWAP_B },
+	{ "zswapped",			NR_ZSWAPPED_B },
 #endif
 	{ "file_mapped",		NR_FILE_MAPPED },
 	{ "file_dirty",			NR_FILE_DIRTY },
@@ -1411,8 +1411,8 @@ static int memcg_page_state_unit(int item)
 {
 	switch (item) {
 	case MEMCG_PERCPU_B:
-	case MEMCG_ZSWAP_B:
-	case MEMCG_ZSWAPPED_B:
+	case NR_ZSWAP_B:
+	case NR_ZSWAPPED_B:
 	case NR_SLAB_RECLAIMABLE_B:
 	case NR_SLAB_UNRECLAIMABLE_B:
 		return 1;
@@ -5482,7 +5482,7 @@ bool obj_cgroup_may_zswap(struct obj_cgroup *objcg)
 
 		/* Force flush to get accurate stats for charging */
 		__mem_cgroup_flush_stats(memcg, true);
-		pages = memcg_page_state(memcg, MEMCG_ZSWAP_B) / PAGE_SIZE;
+		pages = memcg_page_state(memcg, NR_ZSWAP_B) / PAGE_SIZE;
 		if (pages < max)
 			continue;
 		ret = false;
@@ -5511,7 +5511,7 @@ static u64 zswap_current_read(struct cgroup_subsys_state *css,
 	struct mem_cgroup *memcg = mem_cgroup_from_css(css);
 
 	mem_cgroup_flush_stats(memcg);
-	return memcg_page_state(memcg, MEMCG_ZSWAP_B);
+	return memcg_page_state(memcg, NR_ZSWAP_B);
 }
 
 static int zswap_max_show(struct seq_file *m, void *v)
diff --git a/mm/vmstat.c b/mm/vmstat.c
index 86b14b0f77b5..389ff986ceac 100644
--- a/mm/vmstat.c
+++ b/mm/vmstat.c
@@ -1279,6 +1279,8 @@ const char * const vmstat_text[] = {
 #ifdef CONFIG_HUGETLB_PAGE
 	[I(NR_HUGETLB)] = "nr_hugetlb",
 #endif
+	[I(NR_ZSWAP_B)] = "zswap",
+	[I(NR_ZSWAPPED_B)] = "zswapped",
 	[I(NR_BALLOON_PAGES)] = "nr_balloon_pages",
 	[I(NR_KERNEL_FILE_PAGES)] = "nr_kernel_file_pages",
 #undef I
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 24665d7cd4a9..ab085961b0e2 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -216,8 +216,8 @@ struct zs_pool {
 	struct work_struct free_work;
 #endif
 	bool memcg_aware;
-	enum memcg_stat_item compressed_stat;
-	enum memcg_stat_item uncompressed_stat;
+	enum node_stat_item compressed_stat;
+	enum node_stat_item uncompressed_stat;
 	/* protect zspage migration/compaction */
 	rwlock_t lock;
 	atomic_t compaction_in_progress;
@@ -823,6 +823,9 @@ static void __free_zspage(struct zs_pool *pool, struct size_class *class,
 		reset_zpdesc(zpdesc);
 		zpdesc_unlock(zpdesc);
 		zpdesc_dec_zone_page_state(zpdesc);
+		if (pool->memcg_aware)
+			dec_node_page_state(zpdesc_page(zpdesc),
+					    pool->compressed_stat);
 		zpdesc_put(zpdesc);
 		zpdesc = next;
 	} while (zpdesc != NULL);
@@ -974,6 +977,9 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 		__zpdesc_set_zsmalloc(zpdesc);
 
 		zpdesc_inc_zone_page_state(zpdesc);
+		if (pool->memcg_aware)
+			inc_node_page_state(zpdesc_page(zpdesc),
+					    pool->compressed_stat);
 		zpdescs[i] = zpdesc;
 	}
 
@@ -985,6 +991,9 @@ static struct zspage *alloc_zspage(struct zs_pool *pool,
 err:
 	while (--i >= 0) {
 		zpdesc_dec_zone_page_state(zpdescs[i]);
+		if (pool->memcg_aware)
+			dec_node_page_state(zpdesc_page(zpdescs[i]),
+					    pool->compressed_stat);
 		free_zpdesc(zpdescs[i]);
 	}
 	if (pool->memcg_aware)
@@ -1029,10 +1038,48 @@ static bool zspage_empty(struct zspage *zspage)
 }
 
 #ifdef CONFIG_MEMCG
-static void zs_charge_objcg(struct zs_pool *pool, struct obj_cgroup *objcg,
-			    int size)
+static void __zs_mod_memcg_lruvec(struct zs_pool *pool, struct zpdesc *zpdesc,
+				  struct obj_cgroup *objcg, int size,
+				  int sign, unsigned long offset)
 {
 	struct mem_cgroup *memcg;
+	struct lruvec *lruvec;
+	int compressed_size = size, original_size = PAGE_SIZE;
+	int nid = page_to_nid(zpdesc_page(zpdesc));
+	int next_nid = nid;
+
+	if (offset + size > PAGE_SIZE) {
+		struct zpdesc *next_zpdesc = get_next_zpdesc(zpdesc);
+
+		next_nid = page_to_nid(zpdesc_page(next_zpdesc));
+		if (nid != next_nid) {
+			compressed_size = PAGE_SIZE - offset;
+			original_size = (PAGE_SIZE * compressed_size) / size;
+		}
+	}
+
+	rcu_read_lock();
+	memcg = obj_cgroup_memcg(objcg);
+	lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(nid));
+	mod_memcg_lruvec_state(lruvec, pool->compressed_stat,
+			       sign * compressed_size);
+	mod_memcg_lruvec_state(lruvec, pool->uncompressed_stat,
+			       sign * original_size);
+
+	if (nid != next_nid) {
+		lruvec = mem_cgroup_lruvec(memcg, NODE_DATA(next_nid));
+		mod_memcg_lruvec_state(lruvec, pool->compressed_stat,
+				       sign * (size - compressed_size));
+		mod_memcg_lruvec_state(lruvec, pool->uncompressed_stat,
+				       sign * (PAGE_SIZE - original_size));
+	}
+	rcu_read_unlock();
+}
+
+static void zs_charge_objcg(struct zs_pool *pool, struct zpdesc *zpdesc,
+			    struct obj_cgroup *objcg, int size,
+			    unsigned long offset)
+{
 
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;
@@ -1044,18 +1091,19 @@ static void zs_charge_objcg(struct zs_pool *pool, struct obj_cgroup *objcg,
 	if (obj_cgroup_charge(objcg, GFP_KERNEL, size))
 		VM_WARN_ON_ONCE(1);
 
-	rcu_read_lock();
-	memcg = obj_cgroup_memcg(objcg);
-	mod_memcg_state(memcg, pool->compressed_stat, size);
-	mod_memcg_state(memcg, pool->uncompressed_stat, PAGE_SIZE);
-	rcu_read_unlock();
+	__zs_mod_memcg_lruvec(pool, zpdesc, objcg, size, 1, offset);
+
+	/*
+	 * Node-level vmstats are charged in PAGE_SIZE units. As a best-effort,
+	 * always charge the uncompressed stats to the first zpdesc.
+	 */
+	inc_node_page_state(zpdesc_page(zpdesc), pool->uncompressed_stat);
 }
 
-static void zs_uncharge_objcg(struct zs_pool *pool, struct obj_cgroup *objcg,
-			      int size)
+static void zs_uncharge_objcg(struct zs_pool *pool, struct zpdesc *zpdesc,
+			      struct obj_cgroup *objcg, int size,
+			      unsigned long offset)
 {
-	struct mem_cgroup *memcg;
-
 	if (!cgroup_subsys_on_dfl(memory_cgrp_subsys))
 		return;
 
@@ -1063,20 +1111,24 @@ static void zs_uncharge_objcg(struct zs_pool *pool, struct obj_cgroup *objcg,
 
 	obj_cgroup_uncharge(objcg, size);
 
-	rcu_read_lock();
-	memcg = obj_cgroup_memcg(objcg);
-	mod_memcg_state(memcg, pool->compressed_stat, -size);
-	mod_memcg_state(memcg, pool->uncompressed_stat, -(int)PAGE_SIZE);
-	rcu_read_unlock();
+	__zs_mod_memcg_lruvec(pool, zpdesc, objcg, size, -1, offset);
+
+	/*
+	 * Node-level vmstats are charged in PAGE_SIZE units. As a best-effort,
+	 * always uncharge the uncompressed stats from the first zpdesc.
+	 */
+	dec_node_page_state(zpdesc_page(zpdesc), pool->uncompressed_stat);
 }
 #else
-static void zs_charge_objcg(struct zs_pool *pool, struct obj_cgroup *objcg,
-			    int size)
+static void zs_charge_objcg(struct zs_pool *pool, struct zpdesc *zpdesc,
+			    struct obj_cgroup *objcg, int size,
+			    unsigned long offset)
 {
 }
 
-static void zs_uncharge_objcg(struct zs_pool *pool, struct obj_cgroup *objcg,
-			      int size)
+static void zs_uncharge_objcg(struct zs_pool *pool, struct zpdesc *zpdesc,
+			      struct obj_cgroup *objcg, int size,
+			      unsigned long offset)
 {
 }
 #endif
@@ -1298,7 +1350,7 @@ void zs_obj_write(struct zs_pool *pool, unsigned long handle,
 		WARN_ON_ONCE(!pool->memcg_aware);
 		zspage->objcgs[obj_idx] = objcg;
 		obj_cgroup_get(objcg);
-		zs_charge_objcg(pool, objcg, class->size);
+		zs_charge_objcg(pool, zpdesc, objcg, class->size, off);
 	}
 
 	if (!ZsHugePage(zspage))
@@ -1477,7 +1529,7 @@ static void obj_free(int class_size, unsigned long obj)
 	if (pool->memcg_aware && zspage->objcgs[f_objidx]) {
 		struct obj_cgroup *objcg = zspage->objcgs[f_objidx];
 
-		zs_uncharge_objcg(pool, objcg, class_size);
+		zs_uncharge_objcg(pool, f_zpdesc, objcg, class_size, f_offset);
 		obj_cgroup_put(objcg);
 		zspage->objcgs[f_objidx] = NULL;
 	}
@@ -2191,8 +2243,8 @@ static int calculate_zspage_chain_size(int class_size)
 * otherwise NULL.
 */
struct zs_pool *zs_create_pool(const char *name, bool memcg_aware,
-			       enum memcg_stat_item compressed_stat,
-			       enum memcg_stat_item uncompressed_stat)
+			       enum node_stat_item compressed_stat,
+			       enum node_stat_item uncompressed_stat)
{
	int i;
	struct zs_pool *pool;
diff --git a/mm/zswap.c b/mm/zswap.c
index d81e2db4490b..2e9352b46693 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -256,8 +256,7 @@ static struct zswap_pool *zswap_pool_create(char *compressor)
 
 	/* unique name for each pool specifically required by zsmalloc */
 	snprintf(name, 38, "zswap%x", atomic_inc_return(&zswap_pools_count));
-	pool->zs_pool = zs_create_pool(name, true, MEMCG_ZSWAP_B,
-				       MEMCG_ZSWAPPED_B);
+	pool->zs_pool = zs_create_pool(name, true, NR_ZSWAP_B, NR_ZSWAPPED_B);
 	if (!pool->zs_pool)
 		goto error;
 
@@ -1214,9 +1213,9 @@ static unsigned long zswap_shrinker_count(struct shrinker *shrinker,
 	 */
 	if (!mem_cgroup_disabled()) {
 		mem_cgroup_flush_stats(memcg);
-		nr_backing = memcg_page_state(memcg, MEMCG_ZSWAP_B);
+		nr_backing = memcg_page_state(memcg, NR_ZSWAP_B);
 		nr_backing >>= PAGE_SHIFT;
-		nr_stored = memcg_page_state(memcg, MEMCG_ZSWAPPED_B);
+		nr_stored = memcg_page_state(memcg, NR_ZSWAPPED_B);
 		nr_stored >>= PAGE_SHIFT;
 	} else {
 		nr_backing = zswap_total_pages();
-- 
2.52.0