From: Chengming Zhou <zhouchengming@bytedance.com>
We have to invalidate any duplicate entry even when !zswap_enabled,
since zswap can be disabled at any time. If the folio was stored
successfully before, then gets dirtied again while zswap is disabled,
we won't invalidate the old duplicate entry in zswap_store(). So a
later LRU writeback may overwrite the new data in the swapfile.
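
A simplified sequence of events (illustrative):

1. zswap enabled: zswap_store(folio) succeeds and the compressed copy
   is kept in the zswap tree.
2. The folio is dirtied and swapped out again, but zswap has been
   disabled in the meantime.
3. zswap_store() returns false right away on !zswap_enabled, leaving
   the stale entry from step 1 in the tree, and the new data is
   written to the swapfile via __swap_writepage().
4. zswap LRU writeback later picks the stale entry and writes the old
   compressed copy back to the same swap slot, overwriting the new
   data.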
Fixes: 42c06a0e8ebe ("mm: kill frontswap")
Cc: <stable@vger.kernel.org>
Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
---
mm/zswap.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/mm/zswap.c b/mm/zswap.c
index fe7ee2640c69..32633d0597dc 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -1516,7 +1516,7 @@ bool zswap_store(struct folio *folio)
if (folio_test_large(folio))
return false;
- if (!zswap_enabled || !tree)
+ if (!tree)
return false;
/*
@@ -1531,6 +1531,10 @@ bool zswap_store(struct folio *folio)
zswap_invalidate_entry(tree, dupentry);
}
spin_unlock(&tree->lock);
+
+ if (!zswap_enabled)
+ return false;
+
objcg = get_obj_cgroup_from_folio(folio);
if (objcg && !obj_cgroup_may_zswap(objcg)) {
memcg = get_mem_cgroup_from_objcg(objcg);
--
2.40.1
On Thu, 8 Feb 2024 02:32:54 +0000 chengming.zhou@linux.dev wrote:
> From: Chengming Zhou <zhouchengming@bytedance.com>
>
> We have to invalidate any duplicate entry even when !zswap_enabled,
> since zswap can be disabled at any time. If the folio was stored
> successfully before, then gets dirtied again while zswap is disabled,
> we won't invalidate the old duplicate entry in zswap_store(). So a
> later LRU writeback may overwrite the new data in the swapfile.
>
> ...
>
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1516,7 +1516,7 @@ bool zswap_store(struct folio *folio)
> if (folio_test_large(folio))
> return false;
>
> - if (!zswap_enabled || !tree)
> + if (!tree)
> return false;
>
> /*
> @@ -1531,6 +1531,10 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> spin_unlock(&tree->lock);
> +
> + if (!zswap_enabled)
> + return false;
> +
> objcg = get_obj_cgroup_from_folio(folio);
> if (objcg && !obj_cgroup_may_zswap(objcg)) {
> memcg = get_mem_cgroup_from_objcg(objcg);
OK, thanks.
I saw only one reject from mm-unstable patches. Your patch "mm/zswap:
make sure each swapfile always have zswap rb-tree" now does
--- a/mm/zswap.c~mm-zswap-make-sure-each-swapfile-always-have-zswap-rb-tree
+++ a/mm/zswap.c
@@ -1518,9 +1518,6 @@ bool zswap_store(struct folio *folio)
if (folio_test_large(folio))
return false;
- if (!tree)
- return false;
-
/*
* If this is a duplicate, it must be removed before attempting to store
* it, otherwise, if the store fails the old page won't be removed from
On 2024/2/9 05:09, Andrew Morton wrote:
> On Thu, 8 Feb 2024 02:32:54 +0000 chengming.zhou@linux.dev wrote:
>
>> From: Chengming Zhou <zhouchengming@bytedance.com>
>>
>> We have to invalidate any duplicate entry even when !zswap_enabled,
>> since zswap can be disabled at any time. If the folio was stored
>> successfully before, then gets dirtied again while zswap is disabled,
>> we won't invalidate the old duplicate entry in zswap_store(). So a
>> later LRU writeback may overwrite the new data in the swapfile.
>>
>> ...
>>
>> --- a/mm/zswap.c
>> +++ b/mm/zswap.c
>> @@ -1516,7 +1516,7 @@ bool zswap_store(struct folio *folio)
>> if (folio_test_large(folio))
>> return false;
>>
>> - if (!zswap_enabled || !tree)
>> + if (!tree)
>> return false;
>>
>> /*
>> @@ -1531,6 +1531,10 @@ bool zswap_store(struct folio *folio)
>> zswap_invalidate_entry(tree, dupentry);
>> }
>> spin_unlock(&tree->lock);
>> +
>> + if (!zswap_enabled)
>> + return false;
>> +
>> objcg = get_obj_cgroup_from_folio(folio);
>> if (objcg && !obj_cgroup_may_zswap(objcg)) {
>> memcg = get_mem_cgroup_from_objcg(objcg);
>
> OK, thanks.
>
> I saw only one reject from mm-unstable patches. Your patch "mm/zswap:
> make sure each swapfile always have zswap rb-tree" now does
It's correct. Thanks!
The other patch, which includes the optimization and cleanup, has been
updated against mm-unstable and resent:
https://lore.kernel.org/all/20240209044112.3883835-1-chengming.zhou@linux.dev/
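
For reference, with both patches applied the top of zswap_store() ends
up roughly like below (a condensed sketch with details elided; the
dupentry handling follows the hunks in this thread, and I'm assuming
the rb-tree lookup helper and locals match mm/zswap.c as of
mm-unstable):

	bool zswap_store(struct folio *folio)
	{
		...
		if (folio_test_large(folio))
			return false;

		/*
		 * Always invalidate any duplicate entry first, even when
		 * zswap is disabled, so a stale compressed copy can't be
		 * written back over the new data later.
		 */
		spin_lock(&tree->lock);
		dupentry = zswap_rb_search(&tree->rbroot, offset);
		if (dupentry)
			zswap_invalidate_entry(tree, dupentry);
		spin_unlock(&tree->lock);

		/* Only after that, honor the runtime enable knob. */
		if (!zswap_enabled)
			return false;
		...
	}

Note there is no !tree check anymore, since the rb-tree patch
guarantees every swapfile has one.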
>
> --- a/mm/zswap.c~mm-zswap-make-sure-each-swapfile-always-have-zswap-rb-tree
> +++ a/mm/zswap.c
> @@ -1518,9 +1518,6 @@ bool zswap_store(struct folio *folio)
> if (folio_test_large(folio))
> return false;
>
> - if (!tree)
> - return false;
> -
> /*
> * If this is a duplicate, it must be removed before attempting to store
> * it, otherwise, if the store fails the old page won't be removed from
>
>
On Thu, Feb 08, 2024 at 02:32:54AM +0000, chengming.zhou@linux.dev wrote:
> From: Chengming Zhou <zhouchengming@bytedance.com>
>
> We have to invalidate any duplicate entry even when !zswap_enabled,
> since zswap can be disabled at any time. If the folio was stored
> successfully before, then gets dirtied again while zswap is disabled,
> we won't invalidate the old duplicate entry in zswap_store(). So a
> later LRU writeback may overwrite the new data in the swapfile.
>
> Fixes: 42c06a0e8ebe ("mm: kill frontswap")
> Cc: <stable@vger.kernel.org>
> Signed-off-by: Chengming Zhou <zhouchengming@bytedance.com>
Acked-by: Johannes Weiner <hannes@cmpxchg.org>
Nice, this is easier to backport and should be less disruptive to
mm-unstable as well. It makes sense to me to put the optimization and
cleanup that was cut out into a separate patch on top of mm-unstable.
> mm/zswap.c | 6 +++++-
> 1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/mm/zswap.c b/mm/zswap.c
> index fe7ee2640c69..32633d0597dc 100644
> --- a/mm/zswap.c
> +++ b/mm/zswap.c
> @@ -1516,7 +1516,7 @@ bool zswap_store(struct folio *folio)
> if (folio_test_large(folio))
> return false;
>
> - if (!zswap_enabled || !tree)
> + if (!tree)
> return false;
>
> /*
> @@ -1531,6 +1531,10 @@ bool zswap_store(struct folio *folio)
> zswap_invalidate_entry(tree, dupentry);
> }
> spin_unlock(&tree->lock);
> +
> + if (!zswap_enabled)
> + return false;
> +
> objcg = get_obj_cgroup_from_folio(folio);
> if (objcg && !obj_cgroup_may_zswap(objcg)) {
> memcg = get_mem_cgroup_from_objcg(objcg);