Hello everyone,

On Android, different applications have varying tolerance for
decompression latency. Applications with a higher tolerance for
decompression latency are better suited to algorithms like ZSTD,
which provides a high compression ratio but slower decompression.
Conversely, applications with a lower tolerance for decompression
latency can use algorithms like LZ4 or LZO, which offer faster
decompression but lower compression ratios. For example, lightweight
applications (with few anonymous pages) or applications without a
foreground UI typically have a higher tolerance for decompression
latency.

Similarly, in memory allocation slow paths or under high CPU
pressure, algorithms with faster compression speeds might be more
appropriate.

This patch series introduces a per-cgroup compression priority
mechanism, where different compression priorities map to different
algorithms. This allows administrators to select an appropriate
compression algorithm on a per-cgroup basis.

Currently, this series is experimental and we would greatly
appreciate community feedback. I'm uncertain whether obtaining the
compression priority via get_cgroup_comp_priority in zram is the
best approach. While this implementation is convenient, it seems
somewhat unusual. Perhaps the next step should be to pass the
compression priority through page->private.

jinji zhong (3):
  mm/memcontrol: Introduce per-cgroup compression priority
  zram: Zram supports per-cgroup compression priority
  Doc: Update documentation for per-cgroup compression priority

 Documentation/admin-guide/blockdev/zram.rst | 18 +++--
 Documentation/admin-guide/cgroup-v2.rst     |  7 ++
 drivers/block/zram/zram_drv.c               | 74 ++++++++++++++++++---
 drivers/block/zram/zram_drv.h               |  2 +
 include/linux/memcontrol.h                  | 19 ++++++
 mm/memcontrol.c                             | 31 +++++++++
 6 files changed, 139 insertions(+), 12 deletions(-)

--
2.48.1
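For concreteness, the memcg side of such a series might look roughly
like the sketch below. Only get_cgroup_comp_priority() is named in the
cover letter; the field name, the cgroup file callback and the bounds
check are guesses for illustration, not the actual patch:

        /* new field in struct mem_cgroup: int comp_priority; (0 = device default) */

        int get_cgroup_comp_priority(struct mem_cgroup *memcg)
        {
                if (mem_cgroup_disabled() || !memcg)
                        return 0;       /* fall back to zram's default algorithm */

                return READ_ONCE(memcg->comp_priority);
        }

        static int memory_comp_priority_write(struct cgroup_subsys_state *css,
                                              struct cftype *cft, s64 val)
        {
                struct mem_cgroup *memcg = mem_cgroup_from_css(css);

                /* zram currently supports at most 4 per-device algorithms */
                if (val < 0 || val > 3)
                        return -EINVAL;

                WRITE_ONCE(memcg->comp_priority, val);
                return 0;
        }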
On Sat, Oct 25, 2025 at 6:53 PM jinji zhong <jinji.z.zhong@gmail.com> wrote:
>
> Hello everyone,
>
> On Android, different applications have varying tolerance for
> decompression latency. Applications with a higher tolerance for
> decompression latency are better suited to algorithms like ZSTD,
> which provides a high compression ratio but slower decompression.
> Conversely, applications with a lower tolerance for decompression
> latency can use algorithms like LZ4 or LZO, which offer faster
> decompression but lower compression ratios. For example, lightweight
> applications (with few anonymous pages) or applications without a
> foreground UI typically have a higher tolerance for decompression
> latency.
>
> Similarly, in memory allocation slow paths or under high CPU
> pressure, algorithms with faster compression speeds might be more
> appropriate.
>
> This patch series introduces a per-cgroup compression priority
> mechanism, where different compression priorities map to different
> algorithms. This allows administrators to select an appropriate
> compression algorithm on a per-cgroup basis.
>
> Currently, this series is experimental and we would greatly
> appreciate community feedback. I'm uncertain whether obtaining the
> compression priority via get_cgroup_comp_priority in zram is the
> best approach. While this implementation is convenient, it seems
> somewhat unusual. Perhaps the next step should be to pass the
> compression priority through page->private.

I agree with TJ's and Shakeel's take on this. You (or some other
zram/zswap users) will have to present a more compelling case for the
necessity of a hierarchical structure for this property :)

The semantics themselves are unclear to me - what's the default? How
should inheritance be defined? What happens when cgroups are killed,
etc.?

As a side note, it seems there is a proposal for swap device priority
(+ Youngjun):

https://lore.kernel.org/all/20250716202006.3640584-1-youngjun.park@lge.com/

Is this something you can leverage?

Another alternative is to make this zram-internal, i.e. add knobs to
zram sysfs, or extend the recomp parameter. I'll defer to zram
maintainers and users to comment on this :)
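To make the inheritance question concrete: one purely hypothetical
semantic would be "a cgroup without an explicit setting inherits from
its nearest ancestor that has one", e.g. (assuming comp_priority is -1
when unset):

        int memcg_effective_comp_priority(struct mem_cgroup *memcg)
        {
                for (; memcg; memcg = parent_mem_cgroup(memcg)) {
                        int prio = READ_ONCE(memcg->comp_priority);

                        if (prio >= 0)
                                return prio;
                }

                return 0;       /* root/unset: device default */
        }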
On (25/10/27 15:46), Nhat Pham wrote:
> Another alternative is to make this zram-internal, i.e. add knobs to
> zram sysfs, or extend the recomp parameter. I'll defer to zram
> maintainers and users to comment on this :)

I think this cannot be purely zram-internal; we'd need some "hint" from
the upper layers about which process/cgroup each particular page
belongs to and what its priority is.
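One way such a hint could reach zram without new plumbing is the memcg
already attached to the folio being written. A rough sketch of the
write-path lookup, assuming the hypothetical get_cgroup_comp_priority()
helper from the cover letter (RCU/refcount details, and whether the
memcg is still attached at this point in swap writeback, are glossed
over):

        static u32 zram_comp_prio_for_page(struct zram *zram, struct page *page)
        {
                struct mem_cgroup *memcg = folio_memcg(page_folio(page));
                int prio = get_cgroup_comp_priority(memcg);

                /* only use priorities that actually have an algorithm loaded */
                if (prio < 0 || prio >= ZRAM_MAX_COMPS || !zram->comps[prio])
                        return ZRAM_PRIMARY_COMP;

                return prio;
        }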
Hi Jinji,

On Sun, Oct 26, 2025 at 01:05:07AM +0000, jinji zhong wrote:
> Hello everyone,
>
> On Android, different applications have varying tolerance for
> decompression latency. Applications with a higher tolerance for
> decompression latency are better suited to algorithms like ZSTD,
> which provides a high compression ratio but slower decompression.
> Conversely, applications with a lower tolerance for decompression
> latency can use algorithms like LZ4 or LZO, which offer faster
> decompression but lower compression ratios. For example, lightweight
> applications (with few anonymous pages) or applications without a
> foreground UI typically have a higher tolerance for decompression
> latency.
>
> Similarly, in memory allocation slow paths or under high CPU
> pressure, algorithms with faster compression speeds might be more
> appropriate.
>
> This patch series introduces a per-cgroup compression priority
> mechanism, where different compression priorities map to different
> algorithms. This allows administrators to select an appropriate
> compression algorithm on a per-cgroup basis.
>
> Currently, this series is experimental and we would greatly
> appreciate community feedback. I'm uncertain whether obtaining the
> compression priority via get_cgroup_comp_priority in zram is the
> best approach. While this implementation is convenient, it seems
> somewhat unusual. Perhaps the next step should be to pass the
> compression priority through page->private.
>

Setting aside the issues in the implementation (like changing the
compression algorithm of a cgroup while it already has some memory
compressed with an older algorithm), I don't think a memcg interface
is the right way to go about it. We usually add interfaces to memcg
that have hierarchical semantics.

Anyway, if you want this feature, I think BPF might be the way to get
this flexibility without introducing any stable API; then you can
experiment and evaluate whether this really helps.
> Hi Jinji,
>
> On Sun, Oct 26, 2025 at 01:05:07AM +0000, jinji zhong wrote:
> > Hello everyone,
> >
> > On Android, different applications have varying tolerance for
> > decompression latency. Applications with a higher tolerance for
> > decompression latency are better suited to algorithms like ZSTD,
> > which provides a high compression ratio but slower decompression.
> > Conversely, applications with a lower tolerance for decompression
> > latency can use algorithms like LZ4 or LZO, which offer faster
> > decompression but lower compression ratios. For example, lightweight
> > applications (with few anonymous pages) or applications without a
> > foreground UI typically have a higher tolerance for decompression
> > latency.
> >
> > Similarly, in memory allocation slow paths or under high CPU
> > pressure, algorithms with faster compression speeds might be more
> > appropriate.
> >
> > This patch series introduces a per-cgroup compression priority
> > mechanism, where different compression priorities map to different
> > algorithms. This allows administrators to select an appropriate
> > compression algorithm on a per-cgroup basis.
> >
> > Currently, this series is experimental and we would greatly
> > appreciate community feedback. I'm uncertain whether obtaining the
> > compression priority via get_cgroup_comp_priority in zram is the
> > best approach. While this implementation is convenient, it seems
> > somewhat unusual. Perhaps the next step should be to pass the
> > compression priority through page->private.
> >
>
> Setting aside the issues in the implementation (like changing the
> compression algorithm of a cgroup while it already has some memory

Zram uses flags to track the compression priority of each page, so the
page can still be decompressed with the algorithm that originally
compressed it.

> compressed with an older algorithm), I don't think a memcg interface
> is the right way to go about it. We usually add interfaces to memcg
> that have hierarchical semantics.

Thanks a lot, Shakeel. I got it.

> Anyway, if you want this feature, I think BPF might be the way to get
> this flexibility without introducing any stable API; then you can
> experiment and evaluate whether this really helps.
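For reference, zram already records the compression priority per entry
for recompression (zram_set_priority()/zram_get_priority() in
zram_drv.c), which is what makes reads independent of any later cgroup
changes. A minimal standalone illustration of the idea, with made-up
bit positions rather than the real flag layout:

        #define ENTRY_COMP_PRIO_SHIFT   24
        #define ENTRY_COMP_PRIO_MASK    (0x3UL << ENTRY_COMP_PRIO_SHIFT)

        static void entry_set_comp_prio(unsigned long *flags, u32 prio)
        {
                *flags &= ~ENTRY_COMP_PRIO_MASK;
                *flags |= ((unsigned long)(prio & 0x3)) << ENTRY_COMP_PRIO_SHIFT;
        }

        static u32 entry_get_comp_prio(unsigned long flags)
        {
                return (flags & ENTRY_COMP_PRIO_MASK) >> ENTRY_COMP_PRIO_SHIFT;
        }

On read, the stored value indexes the same per-device algorithm array
that was used at write time, so each page is always decompressed with
the algorithm that compressed it.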
Hello,

On Sun, Oct 26, 2025 at 01:05:07AM +0000, jinji zhong wrote:
> This patch series introduces a per-cgroup compression priority
> mechanism, where different compression priorities map to different
> algorithms. This allows administrators to select an appropriate
> compression algorithm on a per-cgroup basis.

I don't think it makes sense to tie this to cgroups. Is there something
preventing this from following the process hierarchy?

Thanks.

--
tejun
> Hello,
>
> On Sun, Oct 26, 2025 at 01:05:07AM +0000, jinji zhong wrote:
> > This patch series introduces a per-cgroup compression priority
> > mechanism, where different compression priorities map to different
> > algorithms. This allows administrators to select an appropriate
> > compression algorithm on a per-cgroup basis.
>
> I don't think it makes sense to tie this to cgroups. Is there something
> preventing this from following the process hierarchy?
>
> Thanks.

Hello Tejun,

There is also a layer of page tables between the process and the page,
so making this follow the process hierarchy would be complicated. But
you make a good point; it may indeed be unnecessary to introduce a
separate per-cgroup compression priority. As Nhat suggested, we could
try reusing the per-cgroup swap priority.

> --
> tejun