cat /sys/kernel/debug/zswap/duplicate_entry
2086447

When testing, the duplicate_entry value is very high, yet there is no
warning message in the kernel log. Going by the counter's comment,
"Duplicate store was encountered (rare)", it looks like something has
gone wrong.

Actually it's incremented at the beginning of zswap_store() when the
folio's zswap entry is found to be already on the tree. This is a
normal case: the folio can leave its zswap entry on the tree after
swapin, get dirtied later, and be swapped out/stored by zswap_store()
again, at which point its original zswap entry is found. (Maybe we
could reuse it instead of invalidating it?)

So duplicate_entry should only be incremented in the real bug case,
which already has a WARN_ON(1); counting the bug case on top of the
warning is redundant, so this patch just removes the counter.
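
For reference, the two call sites (condensed from the diff below; an
illustrative sketch only, not part of the change) look like this:

        /* Normal case: a swapped-in, re-dirtied folio is stored again and
         * still has its stale entry from the first store on the tree. */
        spin_lock(&tree->lock);
        entry = zswap_rb_search(&tree->rbroot, offset);
        if (entry) {
                zswap_invalidate_entry(tree, entry);
                zswap_duplicate_entry++;        /* counts a perfectly normal event */
        }
        spin_unlock(&tree->lock);

        ...

        /* Real bug case: the insert still finds a duplicate, which is
         * already flagged loudly by the WARN_ON(). */
        while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
                WARN_ON(1);
                zswap_duplicate_entry++;        /* redundant next to the WARN_ON() */
                zswap_invalidate_entry(tree, dupentry);
        }
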
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
mm/zswap.c | 9 +--------
1 file changed, 1 insertion(+), 8 deletions(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index 4381b7a2d4d6..3fbb7e2c8b8d 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -71,8 +71,6 @@ static u64 zswap_reject_compress_poor;
static u64 zswap_reject_alloc_fail;
/* Store failed because the entry metadata could not be allocated (rare) */
static u64 zswap_reject_kmemcache_fail;
-/* Duplicate store was encountered (rare) */
-static u64 zswap_duplicate_entry;

/* Shrinker work queue */
static struct workqueue_struct *shrink_wq;
@@ -1571,10 +1569,8 @@ bool zswap_store(struct folio *folio)
*/
spin_lock(&tree->lock);
entry = zswap_rb_search(&tree->rbroot, offset);
- if (entry) {
+ if (entry)
zswap_invalidate_entry(tree, entry);
- zswap_duplicate_entry++;
- }
spin_unlock(&tree->lock);
objcg = get_obj_cgroup_from_folio(folio);
if (objcg && !obj_cgroup_may_zswap(objcg)) {
@@ -1661,7 +1657,6 @@ bool zswap_store(struct folio *folio)
*/
while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
WARN_ON(1);
- zswap_duplicate_entry++;
zswap_invalidate_entry(tree, dupentry);
}
if (entry->length) {
@@ -1822,8 +1817,6 @@ static int zswap_debugfs_init(void)
zswap_debugfs_root, &zswap_reject_compress_poor);
debugfs_create_u64("written_back_pages", 0444,
zswap_debugfs_root, &zswap_written_back_pages);
- debugfs_create_u64("duplicate_entry", 0444,
- zswap_debugfs_root, &zswap_duplicate_entry);
debugfs_create_u64("pool_total_size", 0444,
zswap_debugfs_root, &zswap_pool_total_size);
debugfs_create_atomic_t("stored_pages", 0444,
--
b4 0.10.1
On Thu, Feb 1, 2024 at 7:50 AM Chengming Zhou
<zhouchengming@bytedance.com> wrote:
>
> cat /sys/kernel/debug/zswap/duplicate_entry
> 2086447
>
> When testing, the duplicate_entry value is very high, yet there is no
> warning message in the kernel log. Going by the counter's comment,
> "Duplicate store was encountered (rare)", it looks like something has
> gone wrong.
>
> Actually it's incremented at the beginning of zswap_store() when the
> folio's zswap entry is found to be already on the tree. This is a
> normal case: the folio can leave its zswap entry on the tree after
> swapin, get dirtied later, and be swapped out/stored by zswap_store()
> again, at which point its original zswap entry is found. (Maybe we
> could reuse it instead of invalidating it?)
Interesting. So if we make invalidate load the only mode, this oddity
is gone as well?
>
> So duplicate_entry should only be incremented in the real bug case,
> which already has a WARN_ON(1); counting the bug case on top of the
> warning is redundant, so this patch just removes the counter.
But yeah, I have literally never checked this value (maybe I should
ha). I'm fine with removing it, unless someone has a strong case for
this counter?
For now:
Reviewed-by: Nhat Pham <nphamcs@gmail.com>
>
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
> ---
> mm/zswap.c | 9 +--------
> 1 file changed, 1 insertion(+), 8 deletions(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index 4381b7a2d4d6..3fbb7e2c8b8d 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -71,8 +71,6 @@ static u64 zswap_reject_compress_poor;
> static u64 zswap_reject_alloc_fail;
> /* Store failed because the entry metadata could not be allocated (rare) */
> static u64 zswap_reject_kmemcache_fail;
> -/* Duplicate store was encountered (rare) */
> -static u64 zswap_duplicate_entry;
>
> /* Shrinker work queue */
> static struct workqueue_struct *shrink_wq;
> @@ -1571,10 +1569,8 @@ bool zswap_store(struct folio *folio)
> */
> spin_lock(&tree->lock);
> entry = zswap_rb_search(&tree->rbroot, offset);
> - if (entry) {
> + if (entry)
> zswap_invalidate_entry(tree, entry);
> - zswap_duplicate_entry++;
> - }
> spin_unlock(&tree->lock);
> objcg = get_obj_cgroup_from_folio(folio);
> if (objcg && !obj_cgroup_may_zswap(objcg)) {
> @@ -1661,7 +1657,6 @@ bool zswap_store(struct folio *folio)
> */
> while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
> WARN_ON(1);
> - zswap_duplicate_entry++;
> zswap_invalidate_entry(tree, dupentry);
> }
> if (entry->length) {
> @@ -1822,8 +1817,6 @@ static int zswap_debugfs_init(void)
> zswap_debugfs_root, &zswap_reject_compress_poor);
> debugfs_create_u64("written_back_pages", 0444,
> zswap_debugfs_root, &zswap_written_back_pages);
> - debugfs_create_u64("duplicate_entry", 0444,
> - zswap_debugfs_root, &zswap_duplicate_entry);
> debugfs_create_u64("pool_total_size", 0444,
> zswap_debugfs_root, &zswap_pool_total_size);
> debugfs_create_atomic_t("stored_pages", 0444,
>
> --
> b4 0.10.1
On 2024/2/3 06:28, Nhat Pham wrote:
> On Thu, Feb 1, 2024 at 7:50 AM Chengming Zhou
> <zhouchengming@bytedance.com> wrote:
>>
>> cat /sys/kernel/debug/zswap/duplicate_entry
>> 2086447
>>
>> When testing, the duplicate_entry value is very high, yet there is no
>> warning message in the kernel log. Going by the counter's comment,
>> "Duplicate store was encountered (rare)", it looks like something has
>> gone wrong.
>>
>> Actually it's incremented at the beginning of zswap_store() when the
>> folio's zswap entry is found to be already on the tree. This is a
>> normal case: the folio can leave its zswap entry on the tree after
>> swapin, get dirtied later, and be swapped out/stored by zswap_store()
>> again, at which point its original zswap entry is found. (Maybe we
>> could reuse it instead of invalidating it?)
>
> Interesting. So if we make invalidate load the only mode, this oddity
> is gone as well?
Good point!

This oddity is why we need to invalidate the old entry first at the
beginning of zswap_store(). But there is another oddity: a folio that
has already been stored can be dirtied again, so it has to be written
back/stored a second time, in which case we still need to invalidate
its old entry first to avoid hitting the WARN_ON later.
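
Roughly, for that re-dirtied folio (a minimal sketch reusing the code
from the patch, just to illustrate the point; not a new change):

        /* second store of a re-dirtied folio: drop the stale entry first ... */
        spin_lock(&tree->lock);
        entry = zswap_rb_search(&tree->rbroot, offset);
        if (entry)
                zswap_invalidate_entry(tree, entry);
        spin_unlock(&tree->lock);

        /* ... otherwise the later zswap_rb_insert() would return -EEXIST
         * and trip the WARN_ON(1). */
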
Thanks.
>>
>> So duplicate_entry should only be incremented in the real bug case,
>> which already has a WARN_ON(1); counting the bug case on top of the
>> warning is redundant, so this patch just removes the counter.
>
> But yeah, I have literally never checked this value (maybe I should
> ha). I'm fine with removing it, unless someone has a strong case for
> this counter?
>
> For now:
> Reviewed-by: Nhat Pham <nphamcs@gmail.com>
>
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
>> ---
>> mm/zswap.c | 9 +--------
>> 1 file changed, 1 insertion(+), 8 deletions(-)
>>
>> diff --git a/mm/zswap.c b/mm/zswap.c
>> index 4381b7a2d4d6..3fbb7e2c8b8d 100644
>> --- a/mm/zswap.c
>> +++ b/mm/zswap.c
>> @@ -71,8 +71,6 @@ static u64 zswap_reject_compress_poor;
>> static u64 zswap_reject_alloc_fail;
>> /* Store failed because the entry metadata could not be allocated (rare) */
>> static u64 zswap_reject_kmemcache_fail;
>> -/* Duplicate store was encountered (rare) */
>> -static u64 zswap_duplicate_entry;
>>
>> /* Shrinker work queue */
>> static struct workqueue_struct *shrink_wq;
>> @@ -1571,10 +1569,8 @@ bool zswap_store(struct folio *folio)
>> */
>> spin_lock(&tree->lock);
>> entry = zswap_rb_search(&tree->rbroot, offset);
>> - if (entry) {
>> + if (entry)
>> zswap_invalidate_entry(tree, entry);
>> - zswap_duplicate_entry++;
>> - }
>> spin_unlock(&tree->lock);
>> objcg = get_obj_cgroup_from_folio(folio);
>> if (objcg && !obj_cgroup_may_zswap(objcg)) {
>> @@ -1661,7 +1657,6 @@ bool zswap_store(struct folio *folio)
>> */
>> while (zswap_rb_insert(&tree->rbroot, entry, &dupentry) == -EEXIST) {
>> WARN_ON(1);
>> - zswap_duplicate_entry++;
>> zswap_invalidate_entry(tree, dupentry);
>> }
>> if (entry->length) {
>> @@ -1822,8 +1817,6 @@ static int zswap_debugfs_init(void)
>> zswap_debugfs_root, &zswap_reject_compress_poor);
>> debugfs_create_u64("written_back_pages", 0444,
>> zswap_debugfs_root, &zswap_written_back_pages);
>> - debugfs_create_u64("duplicate_entry", 0444,
>> - zswap_debugfs_root, &zswap_duplicate_entry);
>> debugfs_create_u64("pool_total_size", 0444,
>> zswap_debugfs_root, &zswap_pool_total_size);
>> debugfs_create_atomic_t("stored_pages", 0444,
>>
>> --
>> b4 0.10.1
On Thu, Feb 1, 2024 at 7:50 AM Chengming Zhou
<zhouchengming@bytedance.com> wrote:
>
> cat /sys/kernel/debug/zswap/duplicate_entry
> 2086447
>
> When testing, the duplicate_entry value is very high, yet there is no
> warning message in the kernel log. Going by the counter's comment,
> "Duplicate store was encountered (rare)", it looks like something has
> gone wrong.
>
> Actually it's incremented at the beginning of zswap_store() when the
> folio's zswap entry is found to be already on the tree. This is a
> normal case: the folio can leave its zswap entry on the tree after
> swapin, get dirtied later, and be swapped out/stored by zswap_store()
> again, at which point its original zswap entry is found. (Maybe we
> could reuse it instead of invalidating it?)
>
> So duplicate_entry should only be incremented in the real bug case,
> which already has a WARN_ON(1); counting the bug case on top of the
> warning is redundant, so this patch just removes the counter.
>
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>

Acked-by: Yosry Ahmed <yosryahmed@google.com>
On Thu, Feb 01, 2024 at 03:49:04PM +0000, Chengming Zhou wrote:
> cat /sys/kernel/debug/zswap/duplicate_entry
> 2086447
>
> When testing, the duplicate_entry value is very high, yet there is no
> warning message in the kernel log. Going by the counter's comment,
> "Duplicate store was encountered (rare)", it looks like something has
> gone wrong.
>
> Actually it's incremented at the beginning of zswap_store() when the
> folio's zswap entry is found to be already on the tree. This is a
> normal case: the folio can leave its zswap entry on the tree after
> swapin, get dirtied later, and be swapped out/stored by zswap_store()
> again, at which point its original zswap entry is found. (Maybe we
> could reuse it instead of invalidating it?)

Probably not worth it, especially after the next patch.

> So duplicate_entry should only be incremented in the real bug case,
> which already has a WARN_ON(1); counting the bug case on top of the
> warning is redundant, so this patch just removes the counter.
>
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>

Either way, I agree that the WARN_ON() is more useful to point out a
bug than a counter.

Acked-by: Johannes Weiner <hannes@cmpxchg.org>
On 2024/2/2 01:55, Johannes Weiner wrote:
> On Thu, Feb 01, 2024 at 03:49:04PM +0000, Chengming Zhou wrote:
>> cat /sys/kernel/debug/zswap/duplicate_entry
>> 2086447
>>
>> When testing, the duplicate_entry value is very high, yet there is no
>> warning message in the kernel log. Going by the counter's comment,
>> "Duplicate store was encountered (rare)", it looks like something has
>> gone wrong.
>>
>> Actually it's incremented at the beginning of zswap_store() when the
>> folio's zswap entry is found to be already on the tree. This is a
>> normal case: the folio can leave its zswap entry on the tree after
>> swapin, get dirtied later, and be swapped out/stored by zswap_store()
>> again, at which point its original zswap entry is found. (Maybe we
>> could reuse it instead of invalidating it?)
>
> Probably not worth it, especially after the next patch.

You are right, not worth it.

>> So duplicate_entry should only be incremented in the real bug case,
>> which already has a WARN_ON(1); counting the bug case on top of the
>> warning is redundant, so this patch just removes the counter.
>>
>> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
>
> Either way, I agree that the WARN_ON() is more useful to point out a
> bug than a counter.
>
> Acked-by: Johannes Weiner <hannes@cmpxchg.org>

Thanks!