This patchset aims to fix various spurious failures and improve the overall
robustness of the cgroup zswap selftests.
The primary motivation is to make the tests compatible with architectures
that use non-4K page sizes (such as 64K on ppc64le and arm64). Currently,
the tests rely heavily on hardcoded 4K page sizes and fixed memory limits.
On 64K page size systems, these hardcoded values lead to sub-page granularity
accesses, incorrect page count calculations, and insufficient memory pressure
to trigger zswap writeback, ultimately causing the tests to fail.
Additionally, this series addresses OOM kills occurring in test_swapin_nozswap
by dynamically scaling memory limits, and prevents spurious test failures
when zswap is built into the kernel but globally disabled.
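To illustrate the direction of the fix (a sketch only, not the actual patch code; scale_to_pages() is an illustrative helper name):

```c
/* Sketch of runtime page-size scaling: derive sizes from the real page
 * size at runtime instead of assuming 4K. scale_to_pages() is an
 * illustrative helper, not an identifier from the patches. */
#include <unistd.h>

/* Round a nominal byte count down to a whole number of runtime pages,
 * so memory limits scale with the actual page size. */
static long scale_to_pages(long nominal_bytes)
{
	long page_size = sysconf(_SC_PAGESIZE);

	return (nominal_bytes / page_size) * page_size;
}
```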
Changes in v4:
Patch 2: Use %zu format specifier when printing pagesize.
Patch 4: Use page_size instead of BUF_SIZE in test_memcontrol.c.
Patch 6: Print the expected swap amount in KB instead of MB.
Tested on v6.12 across x86_64, aarch64, and ppc64le archs.
Li Wang (7):
selftests/cgroup: skip test_zswap if zswap is globally disabled
selftests/cgroup: avoid OOM in test_swapin_nozswap
selftests/cgroup: use runtime page size for zswpin check
selftests/cgroup: rename PAGE_SIZE to BUF_SIZE in cgroup_util
selftests/cgroup: replace hardcoded page size values in test_zswap
selftest/cgroup: fix zswap test_no_invasive_cgroup_shrink on large
pagesize system
selftest/cgroup: fix zswap attempt_writeback() on 64K pagesize system
.../selftests/cgroup/lib/cgroup_util.c | 18 ++---
.../cgroup/lib/include/cgroup_util.h | 4 +-
tools/testing/selftests/cgroup/test_core.c | 2 +-
tools/testing/selftests/cgroup/test_freezer.c | 2 +-
.../selftests/cgroup/test_memcontrol.c | 15 ++--
tools/testing/selftests/cgroup/test_zswap.c | 79 +++++++++++++------
6 files changed, 74 insertions(+), 46 deletions(-)
--
2.53.0
On Sun, 22 Mar 2026 14:10:31 +0800 Li Wang <liwang@redhat.com> wrote:

> This patchset aims to fix various spurious failures and improve the overall
> robustness of the cgroup zswap selftests.

AI review has questions:
https://sashiko.dev/#/patchset/20260322061038.156146-1-liwang@redhat.com
On Sun, Mar 22, 2026 at 09:18:51AM -0700, Andrew Morton wrote:
> On Sun, 22 Mar 2026 14:10:31 +0800 Li Wang <liwang@redhat.com> wrote:
>
> > This patchset aims to fix various spurious failures and improve the overall
> > robustness of the cgroup zswap selftests.
>
> AI review has questions:
> https://sashiko.dev/#/patchset/20260322061038.156146-1-liwang@redhat.com
> [Sashiko comments in patch 4/7]
> ...
> Could we update this loop, along with the identical loops in
> alloc_anon_noexit() and alloc_anon_50M_check_swap() shown below, to use
> sysconf(_SC_PAGESIZE) instead?
I found that Waiman submitted another patch that does the same thing as this
suggestion. I'll consider merging that one into my patch 4/7.
So, let me talk to Waiman first.
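For reference, the suggested change would look roughly like this (a sketch; touch_pages() and its arguments are illustrative stand-ins for the loops in the cgroup_util.c helpers, not the actual patch):

```c
/* Sketch of the suggested loop change: touch one byte per runtime page
 * instead of assuming a hardcoded 4K stride. touch_pages()/buf/size are
 * illustrative names, simplified from the selftest helpers. */
#include <stdlib.h>
#include <unistd.h>

static void touch_pages(char *buf, size_t size)
{
	size_t page_size = (size_t)sysconf(_SC_PAGESIZE);

	/* Write one byte per page so every page is actually faulted in. */
	for (size_t i = 0; i < size; i += page_size)
		buf[i] = 'a';
}
```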
> [Sashiko comments in patch 5/7]
> ..
> if (zswpin < MB(24) / sysconf(_SC_PAGESIZE)) {
> Should these also be updated to use the new global pagesize variable for
> consistency? Subsequent patches in the series do not seem to correct this
> omission.
Good catch, the remaining ones should be corrected too.
> If control_allocation is NULL, wouldn't the loop immediately dereference it
> and cause an unhandled segmentation fault rather than a graceful test
> failure?
That's right, but it's better to resolve that in another series, not this one.
> However, there does not appear to be a corresponding munmap() call in the
> test's cleanup path. Although the OS reclaims this memory when the test
> process exits, should this explicit unmap be added for a balanced resource
> lifecycle within the test?
That's right, but it's better to resolve that in another series, not this one.
> [Sashiko comments in patch 6/7]
> ...
> If malloc returns a null pointer in a memory-constrained environment, the
> loop will unconditionally dereference it. Should there be a null check
> before the loop?
That's right, but it's better to resolve that in another series, not this one.
> The test data is generated by writing a single 'a' character per page, leaving
> the rest zero-filled:
> for (int i = 0; i < control_allocation_size; i += pagesize)
> control_allocation[i] = 'a';
> This makes the data highly compressible. Because memory.max is set to half of
> control_allocation_size, 512 pages are pushed into zswap.
> 512 pages of mostly zeros can compress down to roughly 11 to 15 kilobytes
> using compressors like zstd, which is well below the 65536 byte (64k)
> zswap.max limit on a 64k page system.
> Since the limit might not be reached, writeback might never trigger,
> causing the test to falsely fail. Should the test use incompressible data
> or a lower fixed limit?
If Sashiko suggests reducing compressibility, we'd need to fill a significant
fraction of each page with varied data, but that would work against the test:
zswap would reject poorly compressing pages and send them straight to swap,
and memory.stat:zswapped might never reach the threshold the test checks
with cg_read_key_long(..., "zswapped") < 1.
So, at most I'd keep the data highly compressible and just ensure non-zero,
unique-per-page markers.
i.e.
control_allocation[i] = (char)((i / pagesize) + 1);
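Spelled out as a helper (standalone sketch; fill_page_markers() is an illustrative name, and note the marker wraps after 255 pages since it is a single char):

```c
/* Demo of the unique-per-page marker fill: each page gets a non-zero
 * byte derived from its page index, so no page is all zeroes and pages
 * are distinct, yet every page stays extremely compressible. The
 * marker wraps after 255 pages because it is a single char. */
#include <stddef.h>

static void fill_page_markers(char *buf, size_t size, size_t pagesize)
{
	for (size_t i = 0; i < size; i += pagesize)
		buf[i] = (char)((i / pagesize) + 1);
}
```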
--
Regards,
Li Wang
On Sun, Mar 22, 2026 at 8:23 PM Li Wang <liwang@redhat.com> wrote:
>
> On Sun, Mar 22, 2026 at 09:18:51AM -0700, Andrew Morton wrote:
> > On Sun, 22 Mar 2026 14:10:31 +0800 Li Wang <liwang@redhat.com> wrote:
> >
> > > This patchset aims to fix various spurious failures and improve the overall
> > > robustness of the cgroup zswap selftests.
> >
> > AI review has questions:
> > https://sashiko.dev/#/patchset/20260322061038.156146-1-liwang@redhat.com
>
> > [Sashiko comments in patch 4/7]
> > ...
> > Could we update this loop, along with the identical loops in
> > alloc_anon_noexit() and alloc_anon_50M_check_swap() shown below, to use
> > sysconf(_SC_PAGESIZE) instead?
>
> I found that Waiman submitted another patch that does the same thing as this
> suggestion. I'll consider merging that one into my patch 4/7.
>
> So, let me talk to Waiman first.

Probably fits better in your patch.

> > The test data is generated by writing a single 'a' character per page, leaving
> > the rest zero-filled:
>
> > for (int i = 0; i < control_allocation_size; i += pagesize)
> > control_allocation[i] = 'a';
>
> > This makes the data highly compressible. Because memory.max is set to half of
> > control_allocation_size, 512 pages are pushed into zswap.
>
> > 512 pages of mostly zeros can compress down to roughly 11 to 15 kilobytes
> > using compressors like zstd, which is well below the 65536 byte (64k)
> > zswap.max limit on a 64k page system.
>
> > Since the limit might not be reached, writeback might never trigger,
> > causing the test to falsely fail. Should the test use incompressible data
> > or a lower fixed limit?
>
> If Sashiko suggests reducing compressibility, we'd need to fill a significant
> fraction of each page with varied data, but that would work against the test:
>
> zswap would reject poorly compressing pages and send them straight to swap,
> and memory.stat:zswapped might never reach the threshold the test checks
> with cg_read_key_long(..., "zswapped") < 1.
>
> So, at most I'd keep the data highly compressible and just ensure non-zero,
> unique-per-page markers.

Sashiko claims that 512 pages will end up consuming 11K to 15K in
zswap with this setup, do you know what the actual number is?
Especially with different compressors? If it's close to 64K, this
might be a problem.

Maybe we can fill half of each page with increasing values? It should
still be compressible but not too compressible.
On Mon, Mar 23, 2026 at 05:12:27PM -0700, Yosry Ahmed wrote:
> On Sun, Mar 22, 2026 at 8:23 PM Li Wang <liwang@redhat.com> wrote:
> >
> > On Sun, Mar 22, 2026 at 09:18:51AM -0700, Andrew Morton wrote:
> > > On Sun, 22 Mar 2026 14:10:31 +0800 Li Wang <liwang@redhat.com> wrote:
> > >
> > > > This patchset aims to fix various spurious failures and improve the overall
> > > > robustness of the cgroup zswap selftests.
> > >
> > > AI review has questions:
> > > https://sashiko.dev/#/patchset/20260322061038.156146-1-liwang@redhat.com
> >
> > > [Sashiko comments in patch 4/7]
> > > ...
> > > Could we update this loop, along with the identical loops in
> > > alloc_anon_noexit() and alloc_anon_50M_check_swap() shown below, to use
> > > sysconf(_SC_PAGESIZE) instead?
> >
> > I found that Waiman submitted another patch that does the same thing as this
> > suggestion. I'll consider merging that one into my patch 4/7.
> >
> > So, let me talk to Waiman first.
>
> Probably fits better in your patch.
>
> > > The test data is generated by writing a single 'a' character per page, leaving
> > > the rest zero-filled:
> >
> > > for (int i = 0; i < control_allocation_size; i += pagesize)
> > > control_allocation[i] = 'a';
> >
> > > This makes the data highly compressible. Because memory.max is set to half of
> > > control_allocation_size, 512 pages are pushed into zswap.
> >
> > > 512 pages of mostly zeros can compress down to roughly 11 to 15 kilobytes
> > > using compressors like zstd, which is well below the 65536 byte (64k)
> > > zswap.max limit on a 64k page system.
> >
> > > Since the limit might not be reached, writeback might never trigger,
> > > causing the test to falsely fail. Should the test use incompressible data
> > > or a lower fixed limit?
> >
> > If Sashiko suggests reducing compressibility, we'd need to fill a significant
> > fraction of each page with varied data, but that would work against the test:
> >
> > zswap would reject poorly compressing pages and send them straight to swap,
> > and memory.stat:zswapped might never reach the threshold the test checks
> > with cg_read_key_long(..., "zswapped") < 1.
> >
> > So, at most I'd keep the data highly compressible and just ensure non-zero,
> > unique-per-page markers.
>
> Sashiko claims that 512 pages will end up consuming 11K to 15K in
> zswap with this setup, do you know what the actual number is?

Not very sure, I guess each 64K page contains 1 byte of 'a' and 65535 bytes
of zero. A single page like that compresses down to roughly 20–30 bytes
(a short literal plus a very long zero run, plus frame/header overhead).
So the estimate is roughly 512 × 25 bytes ≈ 12.8 KB, which is where the
"11 to 15 kilobytes" ballpark comes from.

> Especially with different compressors? If it's close to 64K, this
> might be a problem.

Yes, good point, when I switch to the 'zstd' compressor, it doesn't work.

> Maybe we can fill half of each page with increasing values? It should
> still be compressible but not too compressible.

I tried, this method works on Lzo algorithm but not Zstd.
Anyway, I am still investigating.

--
Regards,
Li Wang
> > Sashiko claims that 512 pages will end up consuming 11K to 15K in
> > zswap with this setup, do you know what the actual number is?
>
> Not very sure, I guess each 64K page contains 1 byte of 'a' and 65535 bytes
> of zero. A single page like that compresses down to roughly 20–30 bytes
> (a short literal plus a very long zero run, plus frame/header overhead).
> So the estimate is roughly 512 × 25 bytes ≈ 12.8 KB, which is where the
> "11 to 15 kilobytes" ballpark comes from.
>
> > Especially with different compressors? If it's close to 64K, this
> > might be a problem.
>
> Yes, good point, when I switch to the 'zstd' compressor, it doesn't work.
>
> > Maybe we can fill half of each page with increasing values? It should
> > still be compressible but not too compressible.
>
> I tried, this method works on Lzo algorithm but not Zstd.
> Anyway, I am still investigating.

Do you mean the compressibility is still very high on zstd? I vaguely
remember filling a page with repeating patterns (e.g. alphabet
letters) seemed to produce a decent compression ratio, but I don't
remember the specifics.

I am pretty sure an LLM could figure out what values will work for
different compression algorithms :)
On Tue, Mar 24, 2026 at 01:28:12PM -0700, Yosry Ahmed wrote:
> > > Sashiko claims that 512 pages will end up consuming 11K to 15K in
> > > zswap with this setup, do you know what the actual number is?
> >
> > Not very sure, I guess each 64K page contains 1 byte of 'a' and 65535 bytes
> > of zero. A single page like that compresses down to roughly 20–30 bytes
> > (a short literal plus a very long zero run, plus frame/header overhead).
> > So the estimate is roughly 512 × 25 bytes ≈ 12.8 KB, which is where the
> > "11 to 15 kilobytes" ballpark comes from.
> >
> > > Especially with different compressors? If it's close to 64K, this
> > > might be a problem.
> >
> > Yes, good point, when I switch to the 'zstd' compressor, it doesn't work.
> >
> > > Maybe we can fill half of each page with increasing values? It should
> > > still be compressible but not too compressible.
> >
> > I tried, this method works on Lzo algorithm but not Zstd.
> > Anyway, I am still investigating.
>
> Do you mean the compressibility is still very high on zstd? I vaguely
> remember filling a page with repeating patterns (e.g. alphabet
> letters) seemed to produce a decent compression ratio, but I don't
> remember the specifics.
>
> I am pretty sure an LLM could figure out what values will work for
> different compression algorithms :)
Well, I have tried many ways of dirtying each page, but none
works with the zstd compressor.
e.g.,
--- a/tools/testing/selftests/cgroup/test_zswap.c
+++ b/tools/testing/selftests/cgroup/test_zswap.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <sys/wait.h>
#include <sys/mman.h>
+#include <sys/random.h>
#include "kselftest.h"
#include "cgroup_util.h"
@@ -473,8 +474,12 @@ static int test_no_invasive_cgroup_shrink(const char *root)
if (cg_enter_current(control_group))
goto out;
control_allocation = malloc(control_allocation_size);
- for (int i = 0; i < control_allocation_size; i += page_size)
- control_allocation[i] = (char)((i / page_size) + 1);
+ unsigned int nr_pages = control_allocation_size/page_size;
+ for (int i = 0; i < nr_pages; i++) {
+ unsigned long off = (unsigned long)i * page_size;
+ memset(&control_allocation[off], 0, page_size);
+ getrandom(&control_allocation[off], nr_pages/2, 0);
+ }
if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
goto out;
Even when I tried to set random data for all of the pages, it still
doesn't work (zstd). But it does work with the lzo compressor; I don't
know if zstd has any additional configuration or if I missed anything there.
My current thought is just to satisfy the lzo (default) compressor in
this patch series, and leave zstd for additional work.
What do you think? Any better idea?
--
Regards,
Li Wang
On Tue, Mar 24, 2026 at 7:26 PM Li Wang <liwang@redhat.com> wrote:
>
> On Tue, Mar 24, 2026 at 01:28:12PM -0700, Yosry Ahmed wrote:
> > > > Sashiko claims that 512 pages will end up consuming 11K to 15K in
> > > > zswap with this setup, do you know what the actual number is?
> > >
> > > Not very sure, I guess each 64K page contains 1 byte of 'a' and 65535 bytes
> > > of zero. A single page like that compresses down to roughly 20–30 bytes
> > > (a short literal plus a very long zero run, plus frame/header overhead).
> > > So the estimate is roughly 512 × 25 bytes ≈ 12.8 KB, which is where the
> > > "11 to 15 kilobytes" ballpark comes from.
> > >
> > > > Especially with different compressors? If it's close to 64K, this
> > > > might be a problem.
> > >
> > > Yes, good point, when I switch to the 'zstd' compressor, it doesn't work.
> > >
> > > > Maybe we can fill half of each page with increasing values? It should
> > > > still be compressible but not too compressible.
> > >
> > > I tried, this method works on Lzo algorithm but not Zstd.
> > > Anyway, I am still investigating.
> >
> > Do you mean the compressibility is still very high on zstd? I vaguely
> > remember filling a page with repeating patterns (e.g. alphabet
> > letters) seemed to produce a decent compression ratio, but I don't
> > remember the specifics.
> >
> > I am pretty sure an LLM could figure out what values will work for
> > different compression algorithms :)
>
> Well, I have tried many ways of dirtying each page, but none
> works with the zstd compressor.
>
> e.g,.
>
> --- a/tools/testing/selftests/cgroup/test_zswap.c
> +++ b/tools/testing/selftests/cgroup/test_zswap.c
> @@ -9,6 +9,7 @@
> #include <string.h>
> #include <sys/wait.h>
> #include <sys/mman.h>
> +#include <sys/random.h>
>
> #include "kselftest.h"
> #include "cgroup_util.h"
> @@ -473,8 +474,12 @@ static int test_no_invasive_cgroup_shrink(const char *root)
> if (cg_enter_current(control_group))
> goto out;
> control_allocation = malloc(control_allocation_size);
> - for (int i = 0; i < control_allocation_size; i += page_size)
> - control_allocation[i] = (char)((i / page_size) + 1);
> + unsigned int nr_pages = control_allocation_size/page_size;
> + for (int i = 0; i < nr_pages; i++) {
> + unsigned long off = (unsigned long)i * page_size;
> + memset(&control_allocation[off], 0, page_size);
> + getrandom(&control_allocation[off], nr_pages/2, 0);
This should be page_size/2, right?
nr_pages is 1024 IIUC, so that's 512 bytes only. If the page size is
64K, we're leaving 63.5K (99% of the page) as zeroes.
> + }
> if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
> goto out;
>
> Even when I tried to set random data for all of the pages, it still
> doesn't work (zstd). But it does work with the lzo compressor; I don't
> know if zstd has any additional configuration or if I missed anything there.
>
> My current thought is just to satisfy the lzo (default) compressor in
> this patch series, and leave zstd for additional work.
>
> What do you think? Any better idea?
Let's check if using page_size/2 fixes it first. If a page is 100%
filled with random data it should be incompressible, so I would be
surprised if 50% random data yields a very high compression ratio.
It would also help if you check what the compression ratio actually is
(i.e. compressed_size / uncompressed_size).
On Tue, Mar 24, 2026 at 07:49:17PM -0700, Yosry Ahmed wrote:
> On Tue, Mar 24, 2026 at 7:26 PM Li Wang <liwang@redhat.com> wrote:
> >
> > On Tue, Mar 24, 2026 at 01:28:12PM -0700, Yosry Ahmed wrote:
> > > > > Sashiko claims that 512 pages will end up consuming 11K to 15K in
> > > > > zswap with this setup, do you know what the actual number is?
> > > >
> > > > Not very sure, I guess each 64K page contains 1 byte of 'a' and 65535 bytes
> > > > of zero. A single page like that compresses down to roughly 20–30 bytes
> > > > (a short literal plus a very long zero run, plus frame/header overhead).
> > > > So the estimate is roughly 512 × 25 bytes ≈ 12.8 KB, which is where the
> > > > "11 to 15 kilobytes" ballpark comes from.
> > > >
> > > > > Especially with different compressors? If it's close to 64K, this
> > > > > might be a problem.
> > > >
> > > > Yes, good point, when I switch to the 'zstd' compressor, it doesn't work.
> > > >
> > > > > Maybe we can fill half of each page with increasing values? It should
> > > > > still be compressible but not too compressible.
> > > >
> > > > I tried, this method works on Lzo algorithm but not Zstd.
> > > > Anyway, I am still investigating.
> > >
> > > Do you mean the compressibility is still very high on zstd? I vaguely
> > > remember filling a page with repeating patterns (e.g. alphabet
> > > letters) seemed to produce a decent compression ratio, but I don't
> > > remember the specifics.
> > >
> > > I am pretty sure an LLM could figure out what values will work for
> > > different compression algorithms :)
> >
> > Well, I have tried many ways of dirtying each page, but none
> > works with the zstd compressor.
> >
> > e.g,.
> >
> > --- a/tools/testing/selftests/cgroup/test_zswap.c
> > +++ b/tools/testing/selftests/cgroup/test_zswap.c
> > @@ -9,6 +9,7 @@
> > #include <string.h>
> > #include <sys/wait.h>
> > #include <sys/mman.h>
> > +#include <sys/random.h>
> >
> > #include "kselftest.h"
> > #include "cgroup_util.h"
> > @@ -473,8 +474,12 @@ static int test_no_invasive_cgroup_shrink(const char *root)
> > if (cg_enter_current(control_group))
> > goto out;
> > control_allocation = malloc(control_allocation_size);
> > - for (int i = 0; i < control_allocation_size; i += page_size)
> > - control_allocation[i] = (char)((i / page_size) + 1);
> > + unsigned int nr_pages = control_allocation_size/page_size;
> > + for (int i = 0; i < nr_pages; i++) {
> > + unsigned long off = (unsigned long)i * page_size;
> > + memset(&control_allocation[off], 0, page_size);
> > + getrandom(&control_allocation[off], nr_pages/2, 0);
>
> This should be page_size/2, right?
Ah, that's right.
> nr_pages is 1024 IIUC, so that's 512 bytes only. If the page size is
> 64K, we're leaving 63.5K (99% of the page) as zeroes.
nr_pages is 512, but you're right on the analysis.
> > + }
> > if (cg_read_key_long(control_group, "memory.stat", "zswapped") < 1)
> > goto out;
> >
> > Even when I tried to set random data for all of the pages, it still
> > doesn't work (zstd). But it does work with the lzo compressor; I don't
> > know if zstd has any additional configuration or if I missed anything there.
> >
> > My current thought is just to satisfy the lzo (default) compressor in
> > this patch series, and leave zstd for additional work.
> >
> > What do you think? Any better idea?
>
> Let's check if using page_size/2 fixes it first. If a page is 100%
> filled with random data it should be incompressible, so I would be
> surprised if 50% random data yields a very high compression ratio.
>
> It would also help if you check what the compression ratio actually is
> (i.e. compressed_size / uncompressed_size).
Randomly dirtying page_size/2 of each page always led to OOM, so I switched
to page_size/4. It looks like both algorithms compress pages successfully,
but zstd doesn't update the 'zswpwb' stat, so the test fails.
Considering that zswap writeback is asynchronous, I additionally introduced
a polling method that checks up to 500 times, but 'zswpwb' still returned zero.
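The polling looks roughly like this (a sketch; poll_counter()/read_counter() are illustrative stand-ins for reading "zswpwb" via cg_read_key_long() in the selftest, with the reader faked here so the sketch is self-contained):

```c
/* Sketch of polling an asynchronously updated writeback counter.
 * In the selftest, read_counter() would be:
 *   cg_read_key_long(cgroup, "memory.stat", "zswpwb");
 * here it is faked via a state pointer so the sketch is runnable. */
#include <unistd.h>

static long read_counter(int *fake_state)
{
	/* Fake reader: becomes non-zero on the fourth poll. */
	return (*fake_state)++ >= 3 ? 1 : 0;
}

static long poll_counter(int *fake_state, int retries)
{
	long val = 0;

	for (int i = 0; i < retries; i++) {
		val = read_counter(fake_state);
		if (val > 0)
			break;
		usleep(1000);	/* back off between polls */
	}
	return val;
}
```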
==== Test results ====
lzo:
# uncompressed: 51511296, compressed: 13353876, ratio: 0.26
# get_cg_wb_count(wb_group) is 206, get_cg_wb_count(control_group) is 0
ok 7 test_no_invasive_cgroup_shrink
zstd:
# uncompressed: 48037888, compressed: 12019013, ratio: 0.25
# get_cg_wb_count(wb_group) is 0, get_cg_wb_count(control_group) is 0
not ok 7 test_no_invasive_cgroup_shrink
==== debug code for the above output ====
long zswapped = cg_read_key_long(control_group, "memory.stat", "zswapped");
long zswap_compressed = cg_read_key_long(control_group, "memory.stat", "zswap");
ksft_print_msg("uncompressed: %ld, compressed: %ld, ratio: %.2f\n",
zswapped, zswap_compressed,
(double)zswap_compressed / zswapped);
ksft_print_msg("get_cg_wb_count(wb_group) is %zu, get_cg_wb_count(control_group) is %zu\n",
get_cg_wb_count(wb_group),
get_cg_wb_count(control_group));
In summary, the problem is that the 'zswpwb' stat does not update when zswap
runs with the zstd algorithm. I'll debug this issue separately on the kernel side.
--
Regards,
Li Wang
> In summary, the problem is that the 'zswpwb' stat does not update when zswap
> runs with the zstd algorithm. I'll debug this issue separately on the kernel side.

I forgot to mention, this issue is only observed on systems with a 64K
pagesize (ppc64le, aarch64). I changed the aarch64 page size to 4K,
and it passed the test every time.

--
Regards,
Li Wang
On Wed, Mar 25, 2026 at 02:17:42PM +0800, Li Wang wrote:
> > In summary, the problem is that the 'zswpwb' stat does not update when zswap
> > runs with the zstd algorithm. I'll debug this issue separately on the kernel side.

Please ignore the above test logs and conclusion.

> I forgot to mention, this issue is only observed on systems with a 64K
> pagesize (ppc64le, aarch64). I changed the aarch64 page size to 4K,
> and it passed the test every time.

Well, finally, I think I've found the root cause of the
test_no_invasive_cgroup_shrink failure.

The test sets up two cgroups: wb_group, which is expected to trigger zswap
writeback, and control_group, which should have pages in zswap but must not
experience any writeback. However, the data patterns used for each group
are reversed:

wb_group uses allocate_bytes(), which only writes a single byte per page
(mem[i] = 'a'). The rest of each page is effectively zero. This data is
trivially compressible, especially by zstd, so the compressed pages easily
fit within zswap.max and writeback is never triggered.

control_group, on the other hand, uses getrandom() to dirty 1/4 of each
page, producing data that is much harder to compress. Ironically, this is
the group that does not need to trigger writeback.

So the test has the hard-to-compress data in the wrong cgroup. The fix is
to swap the allocation patterns: wb_group should use the partially random
data to ensure its compressed pages exceed zswap.max and trigger writeback,
while control_group only needs simple, easily compressible data to occupy
zswap.

I have confirmed this: when I reverse the two patterns, everything passes on
both lzo and zstd. Will fix in the next patch version.

--
Regards,
Li Wang
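Roughly, the swapped fill patterns would look like this (a sketch under this thread's assumptions; dirty_pages_random()/dirty_pages_marker() are illustrative helper names standing in for the getrandom()- and marker-based loops, not the actual patch):

```c
/* Sketch of the two fill patterns to be swapped between the groups.
 * Helper names are illustrative, not from the actual patch. */
#include <stddef.h>
#include <sys/random.h>

/* Hard-to-compress fill (for wb_group): randomize 1/4 of each page so
 * compressed pages exceed zswap.max and writeback triggers. */
static void dirty_pages_random(char *buf, size_t size, size_t page_size)
{
	for (size_t off = 0; off < size; off += page_size)
		getrandom(buf + off, page_size / 4, 0);
}

/* Easily compressible fill (for control_group): one non-zero marker
 * byte per page, so pages sit in zswap without triggering writeback. */
static void dirty_pages_marker(char *buf, size_t size, size_t page_size)
{
	for (size_t off = 0; off < size; off += page_size)
		buf[off] = (char)((off / page_size) + 1);
}
```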