Concurrent writes to the same zram index result in leaked
zsmalloc handles. Schematically, the race looks like this:

CPU0                            CPU1

zram_slot_lock()
zs_free(handle)
zram_slot_unlock()
                                zram_slot_lock()
                                zs_free(handle)
                                zram_slot_unlock()

compress                        compress
handle = zs_malloc()            handle = zs_malloc()
zram_slot_lock()
zram_set_handle(handle)
zram_slot_unlock()
                                zram_slot_lock()
                                zram_set_handle(handle)
                                zram_slot_unlock()

Either CPU0's or CPU1's zsmalloc handle will leak because zs_free()
is done too early. Instead, we need to reset the zram entry right
before we set its new handle, all under the same slot-lock scope.
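
To illustrate the pattern (not the actual driver code), here is a
minimal standalone C sketch using a hypothetical slot/handle model,
with a pthread mutex and malloc()/free() as stand-ins for
zram_slot_lock() and zsmalloc handles:

	#include <pthread.h>
	#include <stdlib.h>

	struct slot {
		pthread_mutex_t lock;
		void *handle;	/* currently published allocation, or NULL */
	};

	/*
	 * Pre-patch shape: the old handle is freed in one critical
	 * section and the new one published in another, so a racing
	 * writer can publish its handle in the window in between,
	 * only to have it silently overwritten (leaked) by ours.
	 */
	void slot_write_racy(struct slot *s, void *new_handle)
	{
		pthread_mutex_lock(&s->lock);
		free(s->handle);	/* freed too early */
		s->handle = NULL;
		pthread_mutex_unlock(&s->lock);

		/* ... compress, allocate new_handle ... */

		pthread_mutex_lock(&s->lock);
		s->handle = new_handle;	/* may clobber a racing writer's handle */
		pthread_mutex_unlock(&s->lock);
	}

	/*
	 * Post-patch shape: reset the entry right before publishing
	 * the new handle, all under a single lock acquisition.
	 */
	void slot_write_fixed(struct slot *s, void *new_handle)
	{
		/* ... compress, allocate new_handle ... */

		pthread_mutex_lock(&s->lock);
		free(s->handle);	/* free whatever was published last */
		s->handle = new_handle;
		pthread_mutex_unlock(&s->lock);
	}

	int main(void)
	{
		struct slot s = { PTHREAD_MUTEX_INITIALIZER, NULL };

		slot_write_fixed(&s, malloc(16));
		slot_write_fixed(&s, malloc(16));	/* first handle freed under the lock */
		free(s.handle);
		return 0;
	}

With the fixed shape, a concurrent writer that publishes first simply
has its handle freed, rather than leaked, by whoever writes the slot
next.
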
Cc: stable@vger.kernel.org
Reported-by: Changhui Zhong <czhong@redhat.com>
Closes: https://lore.kernel.org/all/CAGVVp+UtpGoW5WEdEU7uVTtsSCjPN=ksN6EcvyypAtFDOUf30A@mail.gmail.com/
Fixes: 71268035f5d73 ("zram: free slot memory early during write")
Signed-off-by: Sergey Senozhatsky <senozhatsky@chromium.org>
---
drivers/block/zram/zram_drv.c | 8 +++-----
1 file changed, 3 insertions(+), 5 deletions(-)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 9ac271b82780..78b56cd7698e 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -1788,6 +1788,7 @@ static int write_same_filled_page(struct zram *zram, unsigned long fill,
 				  u32 index)
 {
 	zram_slot_lock(zram, index);
+	zram_free_page(zram, index);
 	zram_set_flag(zram, index, ZRAM_SAME);
 	zram_set_handle(zram, index, fill);
 	zram_slot_unlock(zram, index);
@@ -1825,6 +1826,7 @@ static int write_incompressible_page(struct zram *zram, struct page *page,
 	kunmap_local(src);
 
 	zram_slot_lock(zram, index);
+	zram_free_page(zram, index);
 	zram_set_flag(zram, index, ZRAM_HUGE);
 	zram_set_handle(zram, index, handle);
 	zram_set_obj_size(zram, index, PAGE_SIZE);
@@ -1848,11 +1850,6 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	unsigned long element;
 	bool same_filled;
 
-	/* First, free memory allocated to this slot (if any) */
-	zram_slot_lock(zram, index);
-	zram_free_page(zram, index);
-	zram_slot_unlock(zram, index);
-
 	mem = kmap_local_page(page);
 	same_filled = page_same_filled(mem, &element);
 	kunmap_local(mem);
@@ -1894,6 +1891,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 	zcomp_stream_put(zstrm);
 
 	zram_slot_lock(zram, index);
+	zram_free_page(zram, index);
 	zram_set_handle(zram, index, handle);
 	zram_set_obj_size(zram, index, comp_len);
 	zram_slot_unlock(zram, index);
--
2.51.0.384.g4c02a37b29-goog
On Tue, Sep 9, 2025 at 12:52 PM Sergey Senozhatsky
<senozhatsky@chromium.org> wrote:
> [...]
Hi Sergey,

Thanks for the patch. I re-ran my test with it applied and confirmed
that it fixes the issue.
Tested-by: Changhui Zhong <czhong@redhat.com>
Thanks,