From: Alexey Romanov <avromanov@sberdevices.ru>
Subject: [RFC PATCH v1 1/5] mm/zsmalloc: use ARRAY_SIZE in isolate_zspage()
Date: Tue, 18 Apr 2023 09:24:59 +0300
Message-ID: <20230418062503.62121-2-avromanov@sberdevices.ru>
In-Reply-To: <20230418062503.62121-1-avromanov@sberdevices.ru>
References: <20230418062503.62121-1-avromanov@sberdevices.ru>

Better not to use hardcoded constants: iterate over the fg[] array with
ARRAY_SIZE() so the loop bound stays in sync with the array.
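For illustration, the same idiom as a standalone C sketch; ARRAY_SIZE is
defined locally here and the fullness groups are reduced to a bare enum,
so only the loop-bound pattern carries over:

#include <stdio.h>

#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

enum fullness_group { ZS_ALMOST_FULL, ZS_ALMOST_EMPTY };

int main(void)
{
	enum fullness_group fg[2] = { ZS_ALMOST_FULL, ZS_ALMOST_EMPTY };
	size_t i;

	/* ARRAY_SIZE() keeps the loop bound in sync with fg[]. */
	for (i = 0; i < ARRAY_SIZE(fg); i++)
		printf("group %zu = %d\n", i, fg[i]);

	return 0;
}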
Signed-off-by: Alexey Romanov <avromanov@sberdevices.ru>
---
 mm/zsmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 702bc3fd687a..f23c2da55368 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1888,7 +1888,7 @@ static struct zspage *isolate_zspage(struct size_class *class, bool source)
 		fg[1] = ZS_ALMOST_EMPTY;
 	}

-	for (i = 0; i < 2; i++) {
+	for (i = 0; i < ARRAY_SIZE(fg); i++) {
 		zspage = list_first_entry_or_null(&class->fullness_list[fg[i]],
 							struct zspage, list);
 		if (zspage) {
-- 
2.38.1


From: Alexey Romanov <avromanov@sberdevices.ru>
Subject: [RFC PATCH v1 2/5] mm/zsmalloc: get rid of PAGE_MASK
Date: Tue, 18 Apr 2023 09:25:00 +0300
Message-ID: <20230418062503.62121-3-avromanov@sberdevices.ru>
In-Reply-To: <20230418062503.62121-1-avromanov@sberdevices.ru>
References: <20230418062503.62121-1-avromanov@sberdevices.ru>

Use offset_in_page() macro instead of 'val & ~PAGE_MASK'.
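A standalone sketch of the equivalence being relied on: with a power-of-two
PAGE_SIZE, 'val & ~PAGE_MASK' keeps only the in-page bits of val, which is
exactly what offset_in_page() expresses by name. PAGE_SHIFT and the macros
below are local stand-ins assuming 4 KiB pages:

#include <assert.h>
#include <stdio.h>

/* Local stand-ins for the kernel definitions, assuming 4 KiB pages. */
#define PAGE_SHIFT	12
#define PAGE_SIZE	(1UL << PAGE_SHIFT)
#define PAGE_MASK	(~(PAGE_SIZE - 1))
#define offset_in_page(p)	((unsigned long)(p) & ~PAGE_MASK)

int main(void)
{
	unsigned long class_size = 3264, obj_idx = 7;
	unsigned long off = class_size * obj_idx;

	/* Both forms yield the byte offset of the object inside its page. */
	assert((off & ~PAGE_MASK) == offset_in_page(off));
	printf("offset within page: %lu\n", offset_in_page(off));
	return 0;
}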
Signed-off-by: Alexey Romanov <avromanov@sberdevices.ru>
---
 mm/zsmalloc.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index f23c2da55368..0a3b11aa07a9 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -1425,7 +1425,7 @@ void *zs_map_object(struct zs_pool *pool, unsigned long handle,
 	spin_unlock(&pool->lock);

 	class = zspage_class(pool, zspage);
-	off = (class->size * obj_idx) & ~PAGE_MASK;
+	off = offset_in_page(class->size * obj_idx);

 	local_lock(&zs_map_area.lock);
 	area = this_cpu_ptr(&zs_map_area);
@@ -1465,7 +1465,7 @@ void zs_unmap_object(struct zs_pool *pool, unsigned long handle)
 	obj_to_location(obj, &page, &obj_idx);
 	zspage = get_zspage(page);
 	class = zspage_class(pool, zspage);
-	off = (class->size * obj_idx) & ~PAGE_MASK;
+	off = offset_in_page(class->size * obj_idx);

 	area = this_cpu_ptr(&zs_map_area);
 	if (off + class->size <= PAGE_SIZE)
@@ -1522,7 +1522,7 @@ static unsigned long obj_malloc(struct zs_pool *pool,

 	offset = obj * class->size;
 	nr_page = offset >> PAGE_SHIFT;
-	m_offset = offset & ~PAGE_MASK;
+	m_offset = offset_in_page(offset);
 	m_page = get_first_page(zspage);

 	for (i = 0; i < nr_page; i++)
@@ -1626,7 +1626,7 @@ static void obj_free(int class_size, unsigned long obj, unsigned long *handle)
 	void *vaddr;

 	obj_to_location(obj, &f_page, &f_objidx);
-	f_offset = (class_size * f_objidx) & ~PAGE_MASK;
+	f_offset = offset_in_page(class_size * f_objidx);
 	zspage = get_zspage(f_page);

 	vaddr = kmap_atomic(f_page);
@@ -1718,8 +1718,8 @@ static void zs_object_copy(struct size_class *class, unsigned long dst,
 	obj_to_location(src, &s_page, &s_objidx);
 	obj_to_location(dst, &d_page, &d_objidx);

-	s_off = (class->size * s_objidx) & ~PAGE_MASK;
-	d_off = (class->size * d_objidx) & ~PAGE_MASK;
+	s_off = offset_in_page(class->size * s_objidx);
+	d_off = offset_in_page(class->size * d_objidx);

 	if (s_off + class->size > PAGE_SIZE)
 		s_size = PAGE_SIZE - s_off;
-- 
2.38.1


From: Alexey Romanov <avromanov@sberdevices.ru>
Subject: [RFC PATCH v1 3/5] mm/zsmalloc: introduce objects folding mechanism
Date: Tue, 18 Apr 2023 09:25:01 +0300
Message-ID: <20230418062503.62121-4-avromanov@sberdevices.ru>
In-Reply-To: <20230418062503.62121-1-avromanov@sberdevices.ru>
References: <20230418062503.62121-1-avromanov@sberdevices.ru>

This patch adds a mechanism that scans every zspage in a zs_pool and frees
identical objects, leaving only one copy in memory. All zsmalloc handles
that referenced the freed objects now refer to that single remaining object.

To implement this, we sequentially scan every allocated object, compute a
hash of its contents, and store the object value in a hash table
(hlist_head). On a hash match we compare the contents and, if they are
identical, free the duplicate via obj_free() and update the reference
rbtree.

The rbtree is needed for reference counting: the node key is the object
value and the node value is the number of handles that refer to that
object. This keeps the data consistent, so we never obj_free() an object
that is still referenced by another handle. The mechanism is fully
compatible with zs_compact().
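A minimal userspace sketch of the folding idea: hash each object's contents,
compare byte-for-byte on a hash match, make the handle point at the surviving
copy, and reference-count that copy so it is only freed with its last handle.
The fixed-size chained hash table, malloc'd buffers and refcount field below
stand in for the kernel's hlist table, zspages, handles and per-class rbtree;
it illustrates only the bookkeeping, not the locking or the zsmalloc data
structures:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 64

struct object {
	unsigned char *data;	/* object contents */
	size_t len;
	unsigned int refcnt;	/* how many handles point at this copy */
	struct object *next;	/* hash-bucket chain */
};

struct handle {
	struct object *obj;	/* what zsmalloc would record in the handle */
};

static struct object *buckets[NBUCKETS];

static unsigned long hash_bytes(const unsigned char *p, size_t len)
{
	unsigned long h = 5381;	/* djb2 here; the patch uses xxhash() */

	while (len--)
		h = h * 33 + *p++;
	return h;
}

/* Fold: point the handle at an existing identical object, or publish its own. */
static void fold_handle(struct handle *h)
{
	unsigned long b = hash_bytes(h->obj->data, h->obj->len) % NBUCKETS;
	struct object *cur;

	for (cur = buckets[b]; cur; cur = cur->next) {
		if (cur == h->obj)
			return;		/* already the published copy */
		if (cur->len == h->obj->len &&
		    !memcmp(cur->data, h->obj->data, cur->len)) {
			struct object *dup = h->obj;

			cur->refcnt++;			/* rbtree node cnt++ in the patch */
			h->obj = cur;			/* record_obj() in the patch */
			if (--dup->refcnt == 0) {	/* obj_free() the duplicate */
				free(dup->data);
				free(dup);
			}
			return;
		}
	}
	/* First copy with this hash: remember it for later comparisons. */
	h->obj->next = buckets[b];
	buckets[b] = h->obj;
}

int main(void)
{
	const char *payloads[] = { "alpha", "beta", "alpha" };
	struct handle h[3];
	size_t i;

	for (i = 0; i < 3; i++) {
		h[i].obj = calloc(1, sizeof(*h[i].obj));
		h[i].obj->len = strlen(payloads[i]) + 1;
		h[i].obj->data = (unsigned char *)strdup(payloads[i]);
		h[i].obj->refcnt = 1;
		fold_handle(&h[i]);
	}

	/* h[0] and h[2] now share one object with refcnt == 2. */
	printf("h0 == h2: %d, refcnt = %u\n", h[0].obj == h[2].obj, h[0].obj->refcnt);
	return 0;
}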
Signed-off-by: Alexey Romanov <avromanov@sberdevices.ru>
---
 include/linux/zsmalloc.h |   4 +
 mm/Kconfig               |   9 +
 mm/zsmalloc.c            | 470 ++++++++++++++++++++++++++++++++++++++-
 3 files changed, 476 insertions(+), 7 deletions(-)

diff --git a/include/linux/zsmalloc.h b/include/linux/zsmalloc.h
index a48cd0ffe57d..a581283c68b4 100644
--- a/include/linux/zsmalloc.h
+++ b/include/linux/zsmalloc.h
@@ -36,6 +36,8 @@ enum zs_mapmode {
 struct zs_pool_stats {
 	/* How many pages were migrated (freed) */
 	atomic_long_t pages_compacted;
+	/* How many pages were freed during objects folding */
+	atomic_long_t pages_folded;
 };

 struct zs_pool;
@@ -58,4 +60,6 @@ unsigned long zs_compact(struct zs_pool *pool);
 unsigned int zs_lookup_class_index(struct zs_pool *pool, unsigned int size);

 void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats);
+
+unsigned long zs_fold(struct zs_pool *pool);
 #endif
diff --git a/mm/Kconfig b/mm/Kconfig
index ff7b209dec05..f01ec9038101 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -191,6 +191,15 @@ config ZSMALLOC_STAT
 	  information to userspace via debugfs.
 	  If unsure, say N.

+config ZSMALLOC_FOLD
+	bool "Export zsmalloc objects folding mechanism"
+	depends on ZSMALLOC
+	default n
+	help
+	  This option enables a mechanism for folding identical objects,
+	  which can reduce the amount of memory used by zsmalloc.
+	  If unsure, say N.
+
 menu "SLAB allocator options"

 choice
diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
index 0a3b11aa07a9..de6be26cca65 100644
--- a/mm/zsmalloc.c
+++ b/mm/zsmalloc.c
@@ -62,6 +62,8 @@
 #include
 #include
 #include
+#include
+#include

 #define ZSPAGE_MAGIC	0x58

@@ -178,6 +180,7 @@ enum class_stat_type {
 	CLASS_FULL,
 	OBJ_ALLOCATED,
 	OBJ_USED,
+	OBJ_FOLDED,
 	NR_ZS_STAT_TYPE,
 };

@@ -219,6 +222,14 @@ struct size_class {

 	unsigned int index;
 	struct zs_size_stat stats;
+#ifdef CONFIG_ZSMALLOC_FOLD
+	/*
+	 * We use this rbtree only in zs_free() and __zs_fold().
+	 * These functions always run under spin_lock(&pool->lock), so
+	 * no additional lock specifically for the rbtree is needed.
+	 */
+	struct rb_root fold_rbtree;
+#endif
 };

 /*
@@ -253,6 +264,9 @@ struct zs_pool {
 	struct size_class *size_class[ZS_SIZE_CLASSES];
 	struct kmem_cache *handle_cachep;
 	struct kmem_cache *zspage_cachep;
+#ifdef CONFIG_ZSMALLOC_FOLD
+	struct kmem_cache *fold_rbtree_cachep;
+#endif

 	atomic_long_t pages_allocated;

@@ -307,6 +321,129 @@ struct mapping_area {
 	enum zs_mapmode vm_mm; /* mapping mode */
 };

+#ifdef CONFIG_ZSMALLOC_FOLD
+struct obj_hash_node {
+	unsigned long handle;
+	struct hlist_node next;
+};
+
+struct hash_table {
+	struct hlist_head *table;
+	struct kmem_cache *cachep;
+	size_t size;
+};
+
+struct fold_rbtree_node {
+	struct rb_node node;
+	unsigned long key;
+	unsigned int cnt;
+};
+
+static int fold_rbtree_key_cmp(const void *k, const struct rb_node *node)
+{
+	const struct fold_rbtree_node *entry = rb_entry(node, struct fold_rbtree_node, node);
+	unsigned long key = *(unsigned long *)k;
+
+	if (entry->key == key)
+		return 0;
+
+	return key < entry->key ? -1 : 1;
+}
+
+static struct fold_rbtree_node *fold_rbtree_search(struct rb_root *root,
+		unsigned long key)
+{
+	struct rb_node *node =
+		rb_find((void *)&key, root, fold_rbtree_key_cmp);
+
+	if (!node)
+		return NULL;
+
+	return rb_entry(node, struct fold_rbtree_node, node);
+}
+
+static int fold_rbtree_node_cmp(struct rb_node *a_node, const struct rb_node *b_node)
+{
+	const struct fold_rbtree_node *a = rb_entry(a_node, struct fold_rbtree_node, node);
+
+	return fold_rbtree_key_cmp((void *)&a->key, b_node);
+}
+
+static bool fold_rbtree_insert(struct rb_root *root, struct fold_rbtree_node *data)
+{
+	return rb_find_add(&data->node, root, fold_rbtree_node_cmp);
+}
+
+static struct fold_rbtree_node *fold_rbtree_alloc_node(struct zs_pool *pool,
+		unsigned long key, unsigned int cnt)
+{
+	/*
+	 * This function is called under a spinlock,
+	 * so it is necessary to use the GFP_ATOMIC flag.
+	 */
+	struct fold_rbtree_node *node =
+		kmem_cache_alloc(pool->fold_rbtree_cachep, GFP_ATOMIC);
+
+	if (!node)
+		return NULL;
+
+	node->key = key;
+	node->cnt = cnt;
+
+	return node;
+}
+
+static void fold_rbtree_free_node(struct zs_pool *pool, struct fold_rbtree_node *node)
+{
+	kmem_cache_free(pool->fold_rbtree_cachep, node);
+}
+
+static void free_htable(struct hash_table *htable)
+{
+	size_t i;
+	struct hlist_node *tmp;
+	struct obj_hash_node *node;
+
+	for (i = 0; i < htable->size; i++) {
+		hlist_for_each_entry_safe(node, tmp, &htable->table[i], next) {
+			hlist_del(&node->next);
+			kmem_cache_free(htable->cachep, node);
+		}
+	}
+
+	vfree(htable->table);
+	kmem_cache_destroy(htable->cachep);
+}
+
+static int init_htable(struct hash_table *htable, size_t size)
+{
+	size_t i;
+
+	htable->size = size;
+	htable->table = vmalloc_array(htable->size, sizeof(struct hlist_head));
+	if (!htable->table)
+		return -ENOMEM;
+
+	htable->cachep = kmem_cache_create("fold_htable", sizeof(struct obj_hash_node),
+					   0, 0, NULL);
+	if (!htable->cachep) {
+		vfree(htable->table);
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < htable->size; i++)
+		INIT_HLIST_HEAD(&htable->table[i]);
+
+	return 0;
+}
+
+static void insert_htable(struct hash_table *htable, struct obj_hash_node *node,
+		unsigned long hash)
+{
+	hlist_add_head(&node->next, &htable->table[hash % htable->size]);
+}
+#endif /* CONFIG_ZSMALLOC_FOLD */
+
 /* huge object: pages_per_zspage == 1 && maxobj_per_zspage == 1 */
 static void SetZsHugePage(struct zspage *zspage)
 {
@@ -353,6 +490,18 @@ static int create_cache(struct zs_pool *pool)
 		return 1;
 	}

+#ifdef CONFIG_ZSMALLOC_FOLD
+	pool->fold_rbtree_cachep = kmem_cache_create("fold_rbtree",
+			sizeof(struct fold_rbtree_node), 0, 0, NULL);
+	if (!pool->fold_rbtree_cachep) {
+		kmem_cache_destroy(pool->handle_cachep);
+		kmem_cache_destroy(pool->zspage_cachep);
+		pool->handle_cachep = NULL;
+		pool->zspage_cachep = NULL;
+		return 1;
+	}
+#endif
+
 	return 0;
 }

@@ -360,6 +509,9 @@ static void destroy_cache(struct zs_pool *pool)
 {
 	kmem_cache_destroy(pool->handle_cachep);
 	kmem_cache_destroy(pool->zspage_cachep);
+#ifdef CONFIG_ZSMALLOC_FOLD
+	kmem_cache_destroy(pool->fold_rbtree_cachep);
+#endif
 }

 static unsigned long cache_alloc_handle(struct zs_pool *pool, gfp_t gfp)
@@ -639,15 +791,15 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 	struct size_class *class;
 	int objs_per_zspage;
 	unsigned long class_almost_full, class_almost_empty;
-	unsigned long obj_allocated, obj_used, pages_used, freeable;
+	unsigned long obj_allocated, obj_used, pages_used, freeable, folded;
 	unsigned long total_class_almost_full = 0, total_class_almost_empty = 0;
 	unsigned long total_objs = 0, total_used_objs = 0, total_pages = 0;
-	unsigned long total_freeable = 0;
+	unsigned long total_freeable = 0, total_folded = 0;

-	seq_printf(s, " %5s %5s %11s %12s %13s %10s %10s %16s %8s\n",
+	seq_printf(s, " %5s %5s %11s %12s %13s %10s %10s %16s %8s %8s\n",
			"class", "size", "almost_full", "almost_empty",
			"obj_allocated", "obj_used", "pages_used",
-			"pages_per_zspage", "freeable");
+			"pages_per_zspage", "freeable", "folded");

 	for (i = 0; i < ZS_SIZE_CLASSES; i++) {
 		class = pool->size_class[i];
@@ -660,6 +812,7 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		class_almost_empty = zs_stat_get(class, CLASS_ALMOST_EMPTY);
 		obj_allocated = zs_stat_get(class, OBJ_ALLOCATED);
 		obj_used = zs_stat_get(class, OBJ_USED);
+		folded = zs_stat_get(class, OBJ_FOLDED);
 		freeable = zs_can_compact(class);
 		spin_unlock(&pool->lock);

@@ -668,10 +821,10 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 				class->pages_per_zspage;

 		seq_printf(s, " %5u %5u %11lu %12lu %13lu"
-				" %10lu %10lu %16d %8lu\n",
+				" %10lu %10lu %16d %8lu %8lu\n",
			   i, class->size, class_almost_full, class_almost_empty,
			   obj_allocated, obj_used, pages_used,
-			   class->pages_per_zspage, freeable);
+			   class->pages_per_zspage, freeable, folded);

 		total_class_almost_full += class_almost_full;
 		total_class_almost_empty += class_almost_empty;
@@ -679,13 +832,14 @@ static int zs_stats_size_show(struct seq_file *s, void *v)
 		total_used_objs += obj_used;
 		total_pages += pages_used;
 		total_freeable += freeable;
+		total_folded += folded;
 	}

 	seq_puts(s, "\n");
-	seq_printf(s, " %5s %5s %11lu %12lu %13lu %10lu %10lu %16s %8lu\n",
+	seq_printf(s, " %5s %5s %11lu %12lu %13lu %10lu %10lu %16s %8lu %8lu\n",
			"Total", "", total_class_almost_full,
			total_class_almost_empty, total_objs,
-			total_used_objs, total_pages, "", total_freeable);
+			total_used_objs, total_pages, "", total_freeable, total_folded);

 	return 0;
 }
@@ -1663,6 +1817,9 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	unsigned long obj;
 	struct size_class *class;
 	enum fullness_group fullness;
+#ifdef CONFIG_ZSMALLOC_FOLD
+	struct fold_rbtree_node *node;
+#endif

 	if (IS_ERR_OR_NULL((void *)handle))
 		return;
@@ -1677,6 +1834,22 @@ void zs_free(struct zs_pool *pool, unsigned long handle)
 	zspage = get_zspage(f_page);
 	class = zspage_class(pool, zspage);

+#ifdef CONFIG_ZSMALLOC_FOLD
+	node = fold_rbtree_search(&class->fold_rbtree, obj);
+	if (node) {
+		node->cnt--;
+
+		if (node->cnt) {
+			cache_free_handle(pool, handle);
+			spin_unlock(&pool->lock);
+			return;
+		}
+
+		rb_erase(&node->node, &class->fold_rbtree);
+		fold_rbtree_free_node(pool, node);
+	}
+#endif
+
 	class_stat_dec(class, OBJ_USED, 1);

 #ifdef CONFIG_ZPOOL
@@ -2345,6 +2518,286 @@ unsigned long zs_compact(struct zs_pool *pool)
 }
 EXPORT_SYMBOL_GPL(zs_compact);

+#ifdef CONFIG_ZSMALLOC_FOLD
+struct zs_fold_control {
+	struct zspage *src_zspage;
+	unsigned long handle;
+	char *current_buf;
+	char *folded_buf;
+};
+
+static unsigned long obj_get_hash(void *src, size_t len)
+{
+	return xxhash(src, len, 0);
+}
+
+static void read_object(void *buf, struct zs_pool *pool, unsigned long handle)
+{
+	struct zspage *zspage;
+	struct page *page;
+	unsigned long obj, off;
+	unsigned int obj_idx;
+	struct size_class *class;
+	struct mapping_area *area;
+	struct page *pages[2];
+	void *ret;
+
+	obj = handle_to_obj(handle);
+	obj_to_location(obj, &page, &obj_idx);
+	zspage = get_zspage(page);
+
+	class = zspage_class(pool, zspage);
+	off = offset_in_page(class->size * obj_idx);
+
+	local_lock(&zs_map_area.lock);
+	area = this_cpu_ptr(&zs_map_area);
+	area->vm_mm = ZS_MM_RO;
+	if (off + class->size <= PAGE_SIZE) {
+		area->vm_addr = kmap_local_page(page);
+		ret = area->vm_addr + off;
+		if (likely(!ZsHugePage(zspage)))
+			ret += ZS_HANDLE_SIZE;
+
+		memcpy(buf, ret, class->size);
+		kunmap_local(area->vm_addr);
+		goto out;
+	}
+
+	pages[0] = page;
+	pages[1] = get_next_page(page);
+
+	ret = __zs_map_object(area, pages, off, class->size);
+	if (likely(!ZsHugePage(zspage)))
+		ret += ZS_HANDLE_SIZE;
+
+	memcpy(buf, ret, class->size);
+
+	__zs_unmap_object(area, pages, off, class->size);
+out:
+	local_unlock(&zs_map_area.lock);
+}
+
+static void fold_object(struct size_class *class, unsigned long obj)
+{
+	class_stat_dec(class, OBJ_USED, 1);
+	class_stat_inc(class, OBJ_FOLDED, 1);
+	obj_free(class->size, obj, NULL);
+}
+
+static int zs_cmp_obj_and_fold(struct zs_pool *pool, struct size_class *class,
+		struct hash_table *htable, const struct zs_fold_control *fc)
+{
+	struct fold_rbtree_node *current_rbnode, *fold_rbnode;
+	struct obj_hash_node *hash_node;
+	struct zspage *current_zspage;
+	struct page *page;
+	unsigned long hash;
+
+	read_object(fc->folded_buf, pool, fc->handle);
+	hash = obj_get_hash(fc->folded_buf, class->size);
+
+	hlist_for_each_entry(hash_node, &htable->table[hash % htable->size], next) {
+		unsigned long current_obj, folded_obj;
+		int cmp;
+
+		current_obj = handle_to_obj(hash_node->handle);
+		obj_to_page(current_obj, &page);
+		current_zspage = get_zspage(page);
+
+		/*
+		 * Because we can fold objects on the same zspage,
+		 * current_zspage and fc->src_zspage can be equal and
+		 * fc->src_zspage is already locked.
+		 */
+		if (current_zspage != fc->src_zspage)
+			migrate_write_lock(current_zspage);
+
+		read_object(fc->current_buf, pool, hash_node->handle);
+		cmp = memcmp(fc->folded_buf, fc->current_buf, class->size);
+
+		if (!cmp) {
+			/* Skip the already folded objects */
+			folded_obj = handle_to_obj(fc->handle);
+			if (current_obj == folded_obj)
+				return 0;
+
+			current_rbnode = fold_rbtree_search(&class->fold_rbtree, current_obj);
+			if (!current_rbnode) {
+				/* Two handles refer to an object */
+				current_rbnode = fold_rbtree_alloc_node(pool, current_obj, 2);
+				if (!current_rbnode) {
+					if (current_zspage != fc->src_zspage)
+						migrate_write_unlock(current_zspage);
+
+					return -ENOMEM;
+				}
+
+				fold_rbtree_insert(&class->fold_rbtree, current_rbnode);
+			} else {
+				current_rbnode->cnt++;
+			}
+
+			record_obj(fc->handle, current_obj);
+
+			/*
+			 * This check is necessary in order to avoid freeing an object
+			 * that someone already refers to. This situation can occur when
+			 * there are repeated calls to zs_fold(). For example:
+			 *
+			 * [handle0] [handle1] [handle2] [handle3] [handle4]
+			 *   [obj0]    [obj1]    [obj2]    [obj3]    [obj4]
+			 *
+			 * Let's imagine that obj2 and obj3 are equal, and we call zs_fold():
+			 *
+			 * [handle0] [handle1] [handle2] [handle3] [handle4]
+			 *   [obj0]    [obj1]    [obj2]    [obj2]    [obj4]
+			 *
+			 * Now, handle2 and handle3 refer to the obj2 object. Time passes,
+			 * and now handle0 refers to obj0_n, which is equal to obj2:
+			 *
+			 * [handle0] [handle1] [handle2] [handle3] [handle4]
+			 *  [obj0_n]   [obj1]    [obj2]    [obj2]    [obj4]
+			 *
+			 * If we call the zs_fold() function again, we come to handle2,
+			 * and we understand that the obj2 and obj0_n hashes are the same.
+			 * We can't just free obj2 because handle3 also refers to it already!
+			 */
+			fold_rbnode = fold_rbtree_search(&class->fold_rbtree, folded_obj);
+			if (fold_rbnode) {
+				fold_rbnode->cnt--;
+
+				if (!fold_rbnode->cnt) {
+					rb_erase(&fold_rbnode->node, &class->fold_rbtree);
+					fold_object(class, folded_obj);
+					fold_rbtree_free_node(pool, fold_rbnode);
+				}
+			} else {
+				fold_object(class, folded_obj);
+			}
+
+			if (current_zspage != fc->src_zspage)
+				migrate_write_unlock(current_zspage);
+
+			return 0;
+		} else if (current_zspage != fc->src_zspage) {
+			migrate_write_unlock(current_zspage);
+		}
+	}
+
+	/* We use GFP_ATOMIC because we are under the spin-lock */
+	hash_node = kmem_cache_alloc(htable->cachep, GFP_ATOMIC);
+	if (!hash_node)
+		return -ENOMEM;
+
+	hash_node->handle = fc->handle;
+	insert_htable(htable, hash_node, hash);
+	return 0;
+}
+
+static unsigned long __zs_fold(struct zs_pool *pool, struct size_class *class)
+{
+	struct zspage *src_zspage, *tmp;
+	struct zs_fold_control fc;
+	struct hash_table htable;
+	struct page *s_page;
+	unsigned long pages_freed = 0, handle;
+	enum fullness_group fg, newfg;
+	size_t htable_size;
+	int obj_idx;
+
+	htable_size = zs_stat_get(class, OBJ_USED);
+	if (!htable_size)
+		return 0;
+
+	init_htable(&htable, htable_size);
+
+	/*
+	 * We can't allocate these buffers inside zs_cmp_obj_and_fold()
+	 * because that function runs under the spinlock. In this case, we
+	 * would have to use kmalloc with GFP_ATOMIC, but allocations happen
+	 * very often. Therefore, it is better to allocate these two buffers here.
+	 */
+	fc.current_buf = kmalloc(class->size, GFP_KERNEL);
+	if (!fc.current_buf)
+		return 0;
+
+	fc.folded_buf = kmalloc(class->size, GFP_KERNEL);
+	if (!fc.folded_buf) {
+		kfree(fc.current_buf);
+		return 0;
+	}
+
+	spin_lock(&pool->lock);
+
+	for (fg = ZS_ALMOST_EMPTY; fg <= ZS_FULL; fg++) {
+		list_for_each_entry_safe(src_zspage, tmp, &class->fullness_list[fg], list) {
+			remove_zspage(class, src_zspage, fg);
+			migrate_write_lock(src_zspage);
+
+			fc.src_zspage = src_zspage;
+			s_page = get_first_page(src_zspage);
+			obj_idx = 0;
+
+			/* Iterate over all the objects on the zspage */
+			while (1) {
+				handle = find_alloced_obj(class, s_page, &obj_idx);
+				if (!handle) {
+					s_page = get_next_page(s_page);
+					if (!s_page)
+						break;
+
+					obj_idx = 0;
+					continue;
+				}
+
+				fc.handle = handle;
+				/*
+				 * Nothing bad will happen if we ignore the return code from
+				 * zs_cmp_obj_and_fold(). Errors only occur in case of allocator
+				 * failure. Furthermore, even if an error occurs, this function
+				 * doesn't corrupt any data structures in any way.
+				 */
+				zs_cmp_obj_and_fold(pool, class, &htable, &fc);
+				obj_idx++;
+			}
+
+			newfg = putback_zspage(class, src_zspage);
+			migrate_write_unlock(src_zspage);
+			if (newfg == ZS_EMPTY) {
+				free_zspage(pool, class, src_zspage);
+				pages_freed += class->pages_per_zspage;
+			}
+		}
+	}
+
+	spin_unlock(&pool->lock);
+	free_htable(&htable);
+	kfree(fc.current_buf);
+	kfree(fc.folded_buf);
+
+	return pages_freed;
+}
+
+unsigned long zs_fold(struct zs_pool *pool)
+{
+	int i;
+	struct size_class *class;
+	unsigned long pages_freed = 0;
+
+	for (i = ZS_SIZE_CLASSES - 1; i >= 0; i--) {
+		class = pool->size_class[i];
+		if (class->index != i)
+			continue;
+
+		pages_freed += __zs_fold(pool, class);
+	}
+	atomic_long_add(pages_freed, &pool->stats.pages_folded);
+
+	return pages_freed;
+}
+EXPORT_SYMBOL_GPL(zs_fold);
+#endif /* CONFIG_ZSMALLOC_FOLD */
+
 void zs_pool_stats(struct zs_pool *pool, struct zs_pool_stats *stats)
 {
 	memcpy(stats, &pool->stats, sizeof(struct zs_pool_stats));
@@ -2496,6 +2949,9 @@ struct zs_pool *zs_create_pool(const char *name)
 		class->index = i;
 		class->pages_per_zspage = pages_per_zspage;
 		class->objs_per_zspage = objs_per_zspage;
+#ifdef CONFIG_ZSMALLOC_FOLD
+		class->fold_rbtree = RB_ROOT;
+#endif
 		pool->size_class[i] = class;
 		for (fullness = ZS_EMPTY; fullness < NR_ZS_FULLNESS;
							fullness++)
-- 
2.38.1


From: Alexey Romanov <avromanov@sberdevices.ru>
Subject: [RFC PATCH v1 4/5] zram: add fold sysfs knob
Date: Tue, 18 Apr 2023 09:25:02 +0300
Message-ID: <20230418062503.62121-5-avromanov@sberdevices.ru>
In-Reply-To: <20230418062503.62121-1-avromanov@sberdevices.ru>
References: <20230418062503.62121-1-avromanov@sberdevices.ru>

Allow zram to fold identical zsmalloc objects into a single one:

  echo 1 > /sys/block/zramX/fold
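A small userspace sketch of driving the new knob; the device name zram0 and
the presence of the attribute (CONFIG_ZSMALLOC_FOLD=y) are assumptions of
the example:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* Path assumes zram0; the knob only exists with CONFIG_ZSMALLOC_FOLD=y. */
	const char *path = "/sys/block/zram0/fold";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror(path);
		return 1;
	}
	/* Any write triggers zs_fold() on the device's pool, like `echo 1 > fold`. */
	if (write(fd, "1", 1) != 1)
		perror("write");
	close(fd);
	return 0;
}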
Signed-off-by: Alexey Romanov <avromanov@sberdevices.ru>
---
 drivers/block/zram/zram_drv.c | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index e290d6d97047..06a614d1643d 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1184,6 +1184,25 @@ static ssize_t compact_store(struct device *dev,
 	return len;
 }

+#ifdef CONFIG_ZSMALLOC_FOLD
+static ssize_t fold_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t len)
+{
+	struct zram *zram = dev_to_zram(dev);
+
+	down_read(&zram->init_lock);
+	if (!init_done(zram)) {
+		up_read(&zram->init_lock);
+		return -EINVAL;
+	}
+
+	zs_fold(zram->mem_pool);
+	up_read(&zram->init_lock);
+
+	return len;
+}
+#endif
+
 static ssize_t io_stat_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
@@ -2313,6 +2332,9 @@ static DEVICE_ATTR_RW(writeback_limit_enable);
 static DEVICE_ATTR_RW(recomp_algorithm);
 static DEVICE_ATTR_WO(recompress);
 #endif
+#ifdef CONFIG_ZSMALLOC_FOLD
+static DEVICE_ATTR_WO(fold);
+#endif

 static struct attribute *zram_disk_attrs[] = {
 	&dev_attr_disksize.attr,
@@ -2339,6 +2361,9 @@ static struct attribute *zram_disk_attrs[] = {
 #ifdef CONFIG_ZRAM_MULTI_COMP
 	&dev_attr_recomp_algorithm.attr,
 	&dev_attr_recompress.attr,
+#endif
+#ifdef CONFIG_ZSMALLOC_FOLD
+	&dev_attr_fold.attr,
 #endif
 	NULL,
 };
-- 
2.38.1

From: Alexey Romanov <avromanov@sberdevices.ru>
Subject: [RFC PATCH v1 5/5] zram: add pages_folded to stats
Date: Tue, 18 Apr 2023 09:25:03 +0300
Message-ID: <20230418062503.62121-6-avromanov@sberdevices.ru>
In-Reply-To: <20230418062503.62121-1-avromanov@sberdevices.ru>
References: <20230418062503.62121-1-avromanov@sberdevices.ru>

This counter shows how many pages were freed by folding identical objects
into a single one at the zsmalloc allocator level.
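A sketch of picking the new trailing field out of mm_stat; it assumes a
zram0 device and a kernel with this series applied, so pages_folded is the
tenth column:

#include <stdio.h>

int main(void)
{
	/* mm_stat layout with this series applied; pages_folded is field 10. */
	unsigned long long orig, compr, mem_used, limit, max_used, same,
			   compacted, huge, huge_since, folded;
	FILE *f = fopen("/sys/block/zram0/mm_stat", "r");

	if (!f) {
		perror("mm_stat");
		return 1;
	}
	if (fscanf(f, "%llu %llu %llu %llu %llu %llu %llu %llu %llu %llu",
		   &orig, &compr, &mem_used, &limit, &max_used, &same,
		   &compacted, &huge, &huge_since, &folded) == 10)
		printf("pages_folded: %llu\n", folded);
	fclose(f);
	return 0;
}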
Signed-off-by: Alexey Romanov <avromanov@sberdevices.ru>
---
 Documentation/admin-guide/blockdev/zram.rst | 2 ++
 drivers/block/zram/zram_drv.c               | 5 +++--
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/blockdev/zram.rst b/Documentation/admin-guide/blockdev/zram.rst
index e4551579cb12..349f13a2d310 100644
--- a/Documentation/admin-guide/blockdev/zram.rst
+++ b/Documentation/admin-guide/blockdev/zram.rst
@@ -209,6 +209,7 @@ compact           WO    trigger memory compaction
 debug_stat        RO    this file is used for zram debugging purposes
 backing_dev       RW    set up backend storage for zram to write out
 idle              WO    mark allocated slot as idle
+fold              WO    trigger memory folding
 ======================  ======  ==============================================================


@@ -267,6 +268,7 @@ line of text and contains the following stats separated by whitespace:
 pages_compacted   the number of pages freed during compaction
 huge_pages        the number of incompressible pages
 huge_pages_since  the number of incompressible pages since zram set up
+ pages_folded      the number of pages freed during folding
 ================ =============================================================

 File /sys/block/zram/bd_stat
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 06a614d1643d..3012b297ade5 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1242,7 +1242,7 @@ static ssize_t mm_stat_show(struct device *dev,
 	max_used = atomic_long_read(&zram->stats.max_used_pages);

 	ret = scnprintf(buf, PAGE_SIZE,
-			"%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu %8llu\n",
+			"%8llu %8llu %8llu %8lu %8ld %8llu %8lu %8llu %8llu %8lu\n",
			orig_size << PAGE_SHIFT,
			(u64)atomic64_read(&zram->stats.compr_data_size),
			mem_used << PAGE_SHIFT,
@@ -1251,7 +1251,8 @@ static ssize_t mm_stat_show(struct device *dev,
			(u64)atomic64_read(&zram->stats.same_pages),
			atomic_long_read(&pool_stats.pages_compacted),
			(u64)atomic64_read(&zram->stats.huge_pages),
-			(u64)atomic64_read(&zram->stats.huge_pages_since));
+			(u64)atomic64_read(&zram->stats.huge_pages_since),
+			atomic_long_read(&pool_stats.pages_folded));
 	up_read(&zram->init_lock);

 	return ret;
-- 
2.38.1