From: Sergey Senozhatsky
To: Andrew Morton, Minchan Kim
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Sergey Senozhatsky
Subject: [PATCH 2/7] zram: do not use per-CPU compression streams
Date: Wed, 22 Jan 2025 14:57:40 +0900
Message-ID: <20250122055831.3341175-3-senozhatsky@chromium.org>
In-Reply-To: <20250122055831.3341175-1-senozhatsky@chromium.org>
References: <20250122055831.3341175-1-senozhatsky@chromium.org>

Similarly to the per-entry spin-lock, per-CPU compression streams have
a number of shortcomings.

First, per-CPU stream access has to be done from a non-preemptible
(atomic) section, which imposes the same atomicity requirements on
compression backends as the entry spin-lock does and makes it
impossible to use algorithms that can schedule/wait/sleep during
compression and decompression.

Second, per-CPU streams noticeably increase the memory usage (wastage,
really) of secondary compression streams.  Secondary streams are
allocated per-CPU, just like the primary streams are, yet we never use
more than one secondary stream at a time, because recompression is a
single-threaded action.  This means that the remaining
num_online_cpus() - 1 streams are allocated for nothing, and this
happens per priority level (we can have several secondary compression
algorithms).  Depending on the algorithm this may lead to significant
memory wastage; in addition, each stream also carries a workmem buffer
(2 physical pages).

Instead of per-CPU streams, maintain a list of idle compression streams
and allocate new streams on demand (something we used to do many years
ago), so that zram read() and write() become non-atomic and the
requirements on compression backend implementations are relaxed.  This
also means that we now need only one secondary stream per priority
level.
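[Editorial note, not part of the patch: the idle-stream pool described
above can be sketched in userspace roughly as follows.  This is a
simplified illustration only; it uses pthread primitives instead of the
kernel spinlock/waitqueue, a fixed cap instead of num_online_cpus(),
omits trimming of excess streams on put, and all names are hypothetical.]

/*
 * Illustrative sketch of an idle-stream pool: streams are allocated on
 * demand up to a cap, idle streams are kept on a list, and callers
 * sleep until one becomes available.
 */
#include <pthread.h>
#include <stdlib.h>

struct stream {
        struct stream *next;            /* idle-list linkage; real stream state elided */
};

struct pool {
        pthread_mutex_t lock;
        pthread_cond_t wait;            /* signalled when a stream becomes idle */
        struct stream *idle;            /* singly-linked list of idle streams */
        unsigned int avail;             /* streams allocated so far */
        unsigned int limit;             /* cap, stand-in for num_online_cpus() */
};

static struct pool pool = {
        .lock  = PTHREAD_MUTEX_INITIALIZER,
        .wait  = PTHREAD_COND_INITIALIZER,
        .limit = 4,
};

static struct stream *stream_get(struct pool *p)
{
        struct stream *s;

        pthread_mutex_lock(&p->lock);
        for (;;) {
                if (p->idle) {                          /* reuse an idle stream */
                        s = p->idle;
                        p->idle = s->next;
                        break;
                }
                if (p->avail < p->limit) {              /* allocate a new one on demand */
                        p->avail++;
                        pthread_mutex_unlock(&p->lock);
                        s = calloc(1, sizeof(*s));
                        if (s)
                                return s;
                        pthread_mutex_lock(&p->lock);
                        p->avail--;                     /* allocation failed, give the slot back */
                }
                /* nothing idle and at the cap (or OOM): wait for a put */
                pthread_cond_wait(&p->wait, &p->lock);
        }
        pthread_mutex_unlock(&p->lock);
        return s;
}

static void stream_put(struct pool *p, struct stream *s)
{
        pthread_mutex_lock(&p->lock);
        s->next = p->idle;
        p->idle = s;
        pthread_mutex_unlock(&p->lock);
        pthread_cond_signal(&p->wait);                  /* wake one waiter, if any */
}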
Signed-off-by: Sergey Senozhatsky
---
 drivers/block/zram/zcomp.c    | 162 +++++++++++++++++++---------------
 drivers/block/zram/zcomp.h    |  17 ++--
 drivers/block/zram/zram_drv.c |  29 +++---
 include/linux/cpuhotplug.h    |   1 -
 4 files changed, 108 insertions(+), 101 deletions(-)

diff --git a/drivers/block/zram/zcomp.c b/drivers/block/zram/zcomp.c
index bb514403e305..5d8298fc2616 100644
--- a/drivers/block/zram/zcomp.c
+++ b/drivers/block/zram/zcomp.c
@@ -43,31 +43,40 @@ static const struct zcomp_ops *backends[] = {
         NULL
 };
 
-static void zcomp_strm_free(struct zcomp *comp, struct zcomp_strm *zstrm)
+static void zcomp_strm_free(struct zcomp *comp, struct zcomp_strm *strm)
 {
-        comp->ops->destroy_ctx(&zstrm->ctx);
-        vfree(zstrm->buffer);
-        zstrm->buffer = NULL;
+        comp->ops->destroy_ctx(&strm->ctx);
+        vfree(strm->buffer);
+        kfree(strm);
 }
 
-static int zcomp_strm_init(struct zcomp *comp, struct zcomp_strm *zstrm)
+static struct zcomp_strm *zcomp_strm_alloc(struct zcomp *comp)
 {
+        struct zcomp_strm *strm;
         int ret;
 
-        ret = comp->ops->create_ctx(comp->params, &zstrm->ctx);
-        if (ret)
-                return ret;
+        strm = kzalloc(sizeof(*strm), GFP_KERNEL);
+        if (!strm)
+                return NULL;
+
+        INIT_LIST_HEAD(&strm->entry);
+
+        ret = comp->ops->create_ctx(comp->params, &strm->ctx);
+        if (ret) {
+                kfree(strm);
+                return NULL;
+        }
 
         /*
-         * allocate 2 pages. 1 for compressed data, plus 1 extra for the
-         * case when compressed size is larger than the original one
+         * allocate 2 pages. 1 for compressed data, plus 1 extra in case if
+         * compressed data is larger than the original one.
         */
-        zstrm->buffer = vzalloc(2 * PAGE_SIZE);
-        if (!zstrm->buffer) {
-                zcomp_strm_free(comp, zstrm);
-                return -ENOMEM;
+        strm->buffer = vzalloc(2 * PAGE_SIZE);
+        if (!strm->buffer) {
+                zcomp_strm_free(comp, strm);
+                return NULL;
         }
-        return 0;
+        return strm;
 }
 
 static const struct zcomp_ops *lookup_backend_ops(const char *comp)
@@ -109,13 +118,59 @@ ssize_t zcomp_available_show(const char *comp, char *buf)
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp)
 {
-        local_lock(&comp->stream->lock);
-        return this_cpu_ptr(comp->stream);
+        struct zcomp_strm *strm;
+
+        might_sleep();
+
+        while (1) {
+                spin_lock(&comp->strm_lock);
+                if (!list_empty(&comp->idle_strm)) {
+                        strm = list_first_entry(&comp->idle_strm,
+                                                struct zcomp_strm,
+                                                entry);
+                        list_del(&strm->entry);
+                        spin_unlock(&comp->strm_lock);
+                        return strm;
+                }
+
+                /* cannot allocate new stream, wait for an idle one */
+                if (comp->avail_strm >= num_online_cpus()) {
+                        spin_unlock(&comp->strm_lock);
+                        wait_event(comp->strm_wait,
+                                   !list_empty(&comp->idle_strm));
+                        continue;
+                }
+
+                /* allocate new stream */
+                comp->avail_strm++;
+                spin_unlock(&comp->strm_lock);
+
+                strm = zcomp_strm_alloc(comp);
+                if (strm)
+                        break;
+
+                spin_lock(&comp->strm_lock);
+                comp->avail_strm--;
+                spin_unlock(&comp->strm_lock);
+                wait_event(comp->strm_wait, !list_empty(&comp->idle_strm));
+        }
+
+        return strm;
 }
 
-void zcomp_stream_put(struct zcomp *comp)
+void zcomp_stream_put(struct zcomp *comp, struct zcomp_strm *strm)
 {
-        local_unlock(&comp->stream->lock);
+        spin_lock(&comp->strm_lock);
+        if (comp->avail_strm <= num_online_cpus()) {
+                list_add(&strm->entry, &comp->idle_strm);
+                spin_unlock(&comp->strm_lock);
+                wake_up(&comp->strm_wait);
+                return;
+        }
+
+        comp->avail_strm--;
+        spin_unlock(&comp->strm_lock);
+        zcomp_strm_free(comp, strm);
 }
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
@@ -148,61 +203,19 @@ int zcomp_decompress(struct zcomp *comp, struct zcomp_strm *zstrm,
         return comp->ops->decompress(comp->params, &zstrm->ctx, &req);
 }
 
-int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node)
-{
-        struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-        struct zcomp_strm *zstrm;
-        int ret;
-
-        zstrm = per_cpu_ptr(comp->stream, cpu);
-        local_lock_init(&zstrm->lock);
-
-        ret = zcomp_strm_init(comp, zstrm);
-        if (ret)
-                pr_err("Can't allocate a compression stream\n");
-        return ret;
-}
-
-int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node)
-{
-        struct zcomp *comp = hlist_entry(node, struct zcomp, node);
-        struct zcomp_strm *zstrm;
-
-        zstrm = per_cpu_ptr(comp->stream, cpu);
-        zcomp_strm_free(comp, zstrm);
-        return 0;
-}
-
-static int zcomp_init(struct zcomp *comp, struct zcomp_params *params)
-{
-        int ret;
-
-        comp->stream = alloc_percpu(struct zcomp_strm);
-        if (!comp->stream)
-                return -ENOMEM;
-
-        comp->params = params;
-        ret = comp->ops->setup_params(comp->params);
-        if (ret)
-                goto cleanup;
-
-        ret = cpuhp_state_add_instance(CPUHP_ZCOMP_PREPARE, &comp->node);
-        if (ret < 0)
-                goto cleanup;
-
-        return 0;
-
-cleanup:
-        comp->ops->release_params(comp->params);
-        free_percpu(comp->stream);
-        return ret;
-}
-
 void zcomp_destroy(struct zcomp *comp)
 {
-        cpuhp_state_remove_instance(CPUHP_ZCOMP_PREPARE, &comp->node);
+        struct zcomp_strm *strm;
+
+        while (!list_empty(&comp->idle_strm)) {
+                strm = list_first_entry(&comp->idle_strm,
+                                        struct zcomp_strm,
+                                        entry);
+                list_del(&strm->entry);
+                zcomp_strm_free(comp, strm);
+        }
+
         comp->ops->release_params(comp->params);
-        free_percpu(comp->stream);
         kfree(comp);
 }
 
@@ -229,7 +242,12 @@ struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params)
                 return ERR_PTR(-EINVAL);
         }
 
-        error = zcomp_init(comp, params);
+        INIT_LIST_HEAD(&comp->idle_strm);
+        init_waitqueue_head(&comp->strm_wait);
+        spin_lock_init(&comp->strm_lock);
+
+        comp->params = params;
+        error = comp->ops->setup_params(comp->params);
         if (error) {
                 kfree(comp);
                 return ERR_PTR(error);
diff --git a/drivers/block/zram/zcomp.h b/drivers/block/zram/zcomp.h
index ad5762813842..62330829db3f 100644
--- a/drivers/block/zram/zcomp.h
+++ b/drivers/block/zram/zcomp.h
@@ -3,10 +3,10 @@
 #ifndef _ZCOMP_H_
 #define _ZCOMP_H_
 
-#include
-
 #define ZCOMP_PARAM_NO_LEVEL	INT_MIN
 
+#include
+
 /*
  * Immutable driver (backend) parameters. The driver may attach private
  * data to it (e.g. driver representation of the dictionary, etc.).
@@ -31,7 +31,7 @@ struct zcomp_ctx {
 };
 
 struct zcomp_strm {
-        local_lock_t lock;
+        struct list_head entry;
         /* compression buffer */
         void *buffer;
         struct zcomp_ctx ctx;
@@ -60,16 +60,15 @@ struct zcomp_ops {
         const char *name;
 };
 
-/* dynamic per-device compression frontend */
 struct zcomp {
-        struct zcomp_strm __percpu *stream;
+        struct list_head idle_strm;
+        spinlock_t strm_lock;
+        u32 avail_strm;
+        wait_queue_head_t strm_wait;
         const struct zcomp_ops *ops;
         struct zcomp_params *params;
-        struct hlist_node node;
 };
 
-int zcomp_cpu_up_prepare(unsigned int cpu, struct hlist_node *node);
-int zcomp_cpu_dead(unsigned int cpu, struct hlist_node *node);
 ssize_t zcomp_available_show(const char *comp, char *buf);
 bool zcomp_available_algorithm(const char *comp);
 
@@ -77,7 +76,7 @@ struct zcomp *zcomp_create(const char *alg, struct zcomp_params *params);
 void zcomp_destroy(struct zcomp *comp);
 
 struct zcomp_strm *zcomp_stream_get(struct zcomp *comp);
-void zcomp_stream_put(struct zcomp *comp);
+void zcomp_stream_put(struct zcomp *comp, struct zcomp_strm *strm);
 
 int zcomp_compress(struct zcomp *comp, struct zcomp_strm *zstrm,
                    const void *src, unsigned int *dst_len);
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 7eb7feba3cac..b217c29448ce 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -31,7 +31,6 @@
 #include
 #include
 #include
-#include
 #include
 #include
 
@@ -1606,7 +1605,7 @@ static int read_compressed_page(struct zram *zram, struct page *page, u32 index)
         ret = zcomp_decompress(zram->comps[prio], zstrm, src, size, dst);
         kunmap_local(dst);
         zs_unmap_object(zram->mem_pool, handle);
-        zcomp_stream_put(zram->comps[prio]);
+        zcomp_stream_put(zram->comps[prio], zstrm);
 
         return ret;
 }
@@ -1767,14 +1766,14 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
         kunmap_local(mem);
 
         if (unlikely(ret)) {
-                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm);
                 pr_err("Compression failed! err=%d\n", ret);
                 zs_free(zram->mem_pool, handle);
                 return ret;
         }
 
         if (comp_len >= huge_class_size) {
-                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm);
                 return write_incompressible_page(zram, page, index);
         }
 
@@ -1798,7 +1797,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
                            __GFP_HIGHMEM |
                            __GFP_MOVABLE);
         if (IS_ERR_VALUE(handle)) {
-                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm);
                 atomic64_inc(&zram->stats.writestall);
                 handle = zs_malloc(zram->mem_pool, comp_len,
                                    GFP_NOIO | __GFP_HIGHMEM |
@@ -1810,7 +1809,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
         }
 
         if (!zram_can_store_page(zram)) {
-                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+                zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm);
                 zs_free(zram->mem_pool, handle);
                 return -ENOMEM;
         }
@@ -1818,7 +1817,7 @@ static int zram_write_page(struct zram *zram, struct page *page, u32 index)
         dst = zs_map_object(zram->mem_pool, handle, ZS_MM_WO);
 
         memcpy(dst, zstrm->buffer, comp_len);
-        zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP]);
+        zcomp_stream_put(zram->comps[ZRAM_PRIMARY_COMP], zstrm);
         zs_unmap_object(zram->mem_pool, handle);
 
         zram_slot_write_lock(zram, index);
@@ -1977,7 +1976,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
         kunmap_local(src);
 
         if (ret) {
-                zcomp_stream_put(zram->comps[prio]);
+                zcomp_stream_put(zram->comps[prio], zstrm);
                 return ret;
         }
 
@@ -1987,7 +1986,7 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
                 /* Continue until we make progress */
                 if (class_index_new >= class_index_old ||
                     (threshold && comp_len_new >= threshold)) {
-                        zcomp_stream_put(zram->comps[prio]);
+                        zcomp_stream_put(zram->comps[prio], zstrm);
                         continue;
                 }
 
@@ -2045,13 +2044,13 @@ static int recompress_slot(struct zram *zram, u32 index, struct page *page,
                                __GFP_HIGHMEM |
                                __GFP_MOVABLE);
         if (IS_ERR_VALUE(handle_new)) {
-                zcomp_stream_put(zram->comps[prio]);
+                zcomp_stream_put(zram->comps[prio], zstrm);
                 return PTR_ERR((void *)handle_new);
         }
 
         dst = zs_map_object(zram->mem_pool, handle_new, ZS_MM_WO);
         memcpy(dst, zstrm->buffer, comp_len_new);
-        zcomp_stream_put(zram->comps[prio]);
+        zcomp_stream_put(zram->comps[prio], zstrm);
 
         zs_unmap_object(zram->mem_pool, handle_new);
 
@@ -2799,7 +2798,6 @@ static void destroy_devices(void)
         zram_debugfs_destroy();
         idr_destroy(&zram_index_idr);
         unregister_blkdev(zram_major, "zram");
-        cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE);
 }
 
 static int __init zram_init(void)
@@ -2809,15 +2807,9 @@ static int __init zram_init(void)
 
         BUILD_BUG_ON(__NR_ZRAM_PAGEFLAGS > sizeof(zram_te.flags) * 8);
 
-        ret = cpuhp_setup_state_multi(CPUHP_ZCOMP_PREPARE, "block/zram:prepare",
-                                      zcomp_cpu_up_prepare, zcomp_cpu_dead);
-        if (ret < 0)
-                return ret;
-
         ret = class_register(&zram_control_class);
         if (ret) {
                 pr_err("Unable to register zram-control class\n");
-                cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE);
                 return ret;
         }
 
@@ -2826,7 +2818,6 @@ static int __init zram_init(void)
         if (zram_major <= 0) {
                 pr_err("Unable to get major number\n");
                 class_unregister(&zram_control_class);
-                cpuhp_remove_multi_state(CPUHP_ZCOMP_PREPARE);
                 return -EBUSY;
         }
 
diff --git a/include/linux/cpuhotplug.h b/include/linux/cpuhotplug.h
index 6cc5e484547c..092ace7db8ee 100644
--- a/include/linux/cpuhotplug.h
+++ b/include/linux/cpuhotplug.h
@@ -119,7 +119,6 @@ enum cpuhp_state {
         CPUHP_MM_ZS_PREPARE,
         CPUHP_MM_ZSWP_POOL_PREPARE,
         CPUHP_KVM_PPC_BOOK3S_PREPARE,
-        CPUHP_ZCOMP_PREPARE,
         CPUHP_TIMERS_PREPARE,
         CPUHP_TMIGR_PREPARE,
         CPUHP_MIPS_SOC_PREPARE,
-- 
2.48.0.rc2.279.g1de40edade-goog