From: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
To: linux-kernel@vger.kernel.org, linux-mm@kvack.org, hannes@cmpxchg.org,
    yosry.ahmed@linux.dev, nphamcs@gmail.com, chengming.zhou@linux.dev,
    usamaarif642@gmail.com, ryan.roberts@arm.com, 21cnbao@gmail.com,
    ying.huang@linux.alibaba.com, akpm@linux-foundation.org,
    senozhatsky@chromium.org, sj@kernel.org, kasong@tencent.com,
    linux-crypto@vger.kernel.org, herbert@gondor.apana.org.au,
    davem@davemloft.net, clabbe@baylibre.com, ardb@kernel.org,
    ebiggers@google.com, surenb@google.com, kristen.c.accardi@intel.com,
    vinicius.gomes@intel.com, giovanni.cabiddu@intel.com
Cc: wajdi.k.feghali@intel.com, kanchana.p.sridhar@intel.com
Subject: [PATCH v14 23/26] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
Date: Sat, 24 Jan 2026 19:35:34 -0800
Message-Id: <20260125033537.334628-24-kanchana.p.sridhar@intel.com>
In-Reply-To: <20260125033537.334628-1-kanchana.p.sridhar@intel.com>
References: <20260125033537.334628-1-kanchana.p.sridhar@intel.com>

Currently, the per-CPU acomp_ctx are allocated on pool creation and/or
CPU hotplug, and destroyed on pool destruction or CPU hotunplug. This
complicates lifetime management just to save memory while a CPU is
offline, which is not a common case.

Simplify lifetime management by allocating the per-CPU acomp_ctx once
on pool creation (or on CPU hotplug for CPUs onlined later), and
keeping them allocated until the pool is destroyed. Refactor the
cleanup code from zswap_cpu_comp_dead() into acomp_ctx_dealloc() so it
can be used elsewhere.

The main benefit of using the CPU hotplug multi-state instance startup
callback to allocate the acomp_ctx resources is that it prevents cores
from being offlined until the multi-state instance addition call
returns. From Documentation/core-api/cpu_hotplug.rst:

  "The node list add/remove operations and the callback invocations are
  serialized against CPU hotplug operations."

Furthermore, zswap_[de]compress() cannot contend with
zswap_cpu_comp_prepare() because:

 - During pool creation/deletion, the pool is not in the zswap_pools
   list.

 - During CPU hot[un]plug, the CPU is not yet online, as Yosry pointed
   out. zswap_cpu_comp_prepare() will be run on a control CPU, since
   CPUHP_MM_ZSWP_POOL_PREPARE is in the PREPARE section of
   "enum cpuhp_state".

In both these cases, any recursion into zswap reclaim from
zswap_cpu_comp_prepare() will be handled by the old pool.

The above two observations enable the following simplifications:

1) zswap_cpu_comp_prepare():

   a) acomp_ctx mutex locking:

      If the process gets migrated while zswap_cpu_comp_prepare() is
      running, it will simply complete on the new CPU. On failure, the
      acomp_ctx pointer obtained at the start of
      zswap_cpu_comp_prepare() is passed to acomp_ctx_dealloc(), which,
      again, can at most be migrated along with the process. There are
      no contention scenarios that could leave acomp_ctx's members in
      an inconsistent state, so there is no need to take
      mutex_lock(&acomp_ctx->mutex) in zswap_cpu_comp_prepare().

   b) acomp_ctx mutex initialization:

      Since the pool is not yet on the zswap_pools list, the per-CPU
      acomp_ctx mutex does not need to be initialized in
      zswap_pool_create(). Mutex initialization is restored to
      zswap_cpu_comp_prepare().

   c) Subsequent CPU offline-online transitions:

      zswap_cpu_comp_prepare() checks upfront whether acomp_ctx->acomp
      is valid and, if so, returns success. This handles any CPU
      online-offline-online transitions after pool creation is done.

2) CPU offline vis-a-vis zswap ops:

   Suppose the process is migrated to another CPU before the current
   CPU goes offline. If zswap_[de]compress() holds the
   acomp_ctx->mutex of the offlined CPU, that mutex is released once
   the operation completes on the new CPU. Since there is no teardown
   callback, there is no possibility of a use-after-free.

3) Pool creation/deletion and process migration to another CPU:

   During pool creation/deletion, the pool is not in the zswap_pools
   list. Hence it cannot contend with zswap ops on that CPU.
   However, the process can get migrated.

   a) Pool creation --> zswap_cpu_comp_prepare() --> process migrated:

      * Old CPU offline: no-op.

      * zswap_cpu_comp_prepare() continues to run on the new CPU and
        finishes allocating acomp_ctx resources for the offlined CPU.

   b) Pool deletion --> acomp_ctx_dealloc() --> process migrated:

      * Old CPU offline: no-op.

      * acomp_ctx_dealloc() continues to run on the new CPU and
        finishes de-allocating acomp_ctx resources for the offlined
        CPU.

4) Pool deletion vis-a-vis CPU onlining:

   The call to cpuhp_state_remove_instance() cannot race with
   zswap_cpu_comp_prepare() because of hotplug synchronization.

The acomp_ctx_get_cpu_lock()/acomp_ctx_put_unlock() helpers are
deleted. Instead, zswap_[de]compress() directly call
mutex_[un]lock(&acomp_ctx->mutex).

The per-CPU memory cost of not deleting the acomp_ctx resources upon
CPU offlining, and only deleting them when the pool is destroyed, is
as follows, on x86_64:

  IAA with 8 dst buffers for batching:      64.34 KB
  Software compressors with 1 dst buffer:    8.28 KB

This cost is only paid when a CPU is offlined, and only until it is
onlined again.

Signed-off-by: Kanchana P Sridhar <kanchana.p.sridhar@intel.com>
Acked-by: Yosry Ahmed <yosry.ahmed@linux.dev>
---
 mm/zswap.c | 164 +++++++++++++++++++++-------------------------------
 1 file changed, 66 insertions(+), 98 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 038e240c03dd..9480d54264e4 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -241,6 +241,20 @@ static inline struct xarray *swap_zswap_tree(swp_entry_t swp)
 **********************************/
 static void __zswap_pool_empty(struct percpu_ref *ref);
 
+static void acomp_ctx_dealloc(struct crypto_acomp_ctx *acomp_ctx)
+{
+	if (IS_ERR_OR_NULL(acomp_ctx))
+		return;
+
+	if (!IS_ERR_OR_NULL(acomp_ctx->req))
+		acomp_request_free(acomp_ctx->req);
+
+	if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
+		crypto_free_acomp(acomp_ctx->acomp);
+
+	kfree(acomp_ctx->buffer);
+}
+
 static struct zswap_pool *zswap_pool_create(char *compressor)
 {
 	struct zswap_pool *pool;
@@ -262,19 +276,27 @@ static struct zswap_pool *zswap_pool_create(char *compressor)
 
 	strscpy(pool->tfm_name, compressor, sizeof(pool->tfm_name));
 
-	pool->acomp_ctx = alloc_percpu(*pool->acomp_ctx);
+	/* Many things rely on the zero-initialization. */
+	pool->acomp_ctx = alloc_percpu_gfp(*pool->acomp_ctx,
+					   GFP_KERNEL | __GFP_ZERO);
 	if (!pool->acomp_ctx) {
 		pr_err("percpu alloc failed\n");
 		goto error;
 	}
 
-	for_each_possible_cpu(cpu)
-		mutex_init(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
-
+	/*
+	 * This is serialized against CPU hotplug operations. Hence, cores
+	 * cannot be offlined until this finishes.
+	 */
 	ret = cpuhp_state_add_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
+
+	/*
+	 * cpuhp_state_add_instance() will not cleanup on failure since
+	 * we don't register a hotunplug callback.
+	 */
 	if (ret)
-		goto error;
+		goto cpuhp_add_fail;
 
 	/* being the current pool takes 1 ref; this func expects the
 	 * caller to always add the new pool as the current pool
@@ -291,6 +313,10 @@ static struct zswap_pool *zswap_pool_create(char *compressor)
 
 ref_fail:
 	cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
+
+cpuhp_add_fail:
+	for_each_possible_cpu(cpu)
+		acomp_ctx_dealloc(per_cpu_ptr(pool->acomp_ctx, cpu));
 error:
 	if (pool->acomp_ctx)
 		free_percpu(pool->acomp_ctx);
@@ -321,9 +347,15 @@ static struct zswap_pool *__zswap_pool_create_fallback(void)
 
 static void zswap_pool_destroy(struct zswap_pool *pool)
 {
+	int cpu;
+
 	zswap_pool_debug("destroying", pool);
 
 	cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
+
+	for_each_possible_cpu(cpu)
+		acomp_ctx_dealloc(per_cpu_ptr(pool->acomp_ctx, cpu));
+
 	free_percpu(pool->acomp_ctx);
 
 	zs_destroy_pool(pool->zs_pool);
@@ -735,39 +767,36 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 {
 	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
 	struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
-	struct crypto_acomp *acomp = NULL;
-	struct acomp_req *req = NULL;
-	u8 *buffer = NULL;
-	int ret;
+	int ret = -ENOMEM;
 
-	buffer = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
-	if (!buffer) {
-		ret = -ENOMEM;
-		goto fail;
+	/*
+	 * To handle cases where the CPU goes through online-offline-online
+	 * transitions, we return if the acomp_ctx has already been initialized.
+	 */
+	if (acomp_ctx->acomp) {
+		WARN_ON_ONCE(IS_ERR(acomp_ctx->acomp));
+		return 0;
 	}
 
-	acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
-	if (IS_ERR(acomp)) {
+	acomp_ctx->buffer = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
+	if (!acomp_ctx->buffer)
+		return ret;
+
+	acomp_ctx->acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
+	if (IS_ERR(acomp_ctx->acomp)) {
 		pr_err("could not alloc crypto acomp %s : %ld\n",
-		       pool->tfm_name, PTR_ERR(acomp));
-		ret = PTR_ERR(acomp);
+		       pool->tfm_name, PTR_ERR(acomp_ctx->acomp));
+		ret = PTR_ERR(acomp_ctx->acomp);
 		goto fail;
 	}
 
-	req = acomp_request_alloc(acomp);
-	if (!req) {
+	acomp_ctx->req = acomp_request_alloc(acomp_ctx->acomp);
+	if (!acomp_ctx->req) {
 		pr_err("could not alloc crypto acomp_request %s\n",
 		       pool->tfm_name);
-		ret = -ENOMEM;
 		goto fail;
 	}
 
-	/*
-	 * Only hold the mutex after completing allocations, otherwise we may
-	 * recurse into zswap through reclaim and attempt to hold the mutex
-	 * again resulting in a deadlock.
-	 */
-	mutex_lock(&acomp_ctx->mutex);
 	crypto_init_wait(&acomp_ctx->wait);
 
 	/*
@@ -775,83 +804,19 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 	 * crypto_wait_req(); if the backend of acomp is scomp, the callback
 	 * won't be called, crypto_wait_req() will return without blocking.
 	 */
-	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+	acomp_request_set_callback(acomp_ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
 				   crypto_req_done, &acomp_ctx->wait);
 
-	acomp_ctx->buffer = buffer;
-	acomp_ctx->acomp = acomp;
-	acomp_ctx->req = req;
-
 	acomp_request_set_unit_size(acomp_ctx->req, PAGE_SIZE);
 
-	mutex_unlock(&acomp_ctx->mutex);
+	mutex_init(&acomp_ctx->mutex);
 	return 0;
 
 fail:
-	if (!IS_ERR_OR_NULL(acomp))
-		crypto_free_acomp(acomp);
-	kfree(buffer);
+	acomp_ctx_dealloc(acomp_ctx);
 	return ret;
 }
 
-static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
-{
-	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
-	struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
-	struct acomp_req *req;
-	struct crypto_acomp *acomp;
-	u8 *buffer;
-
-	if (IS_ERR_OR_NULL(acomp_ctx))
-		return 0;
-
-	mutex_lock(&acomp_ctx->mutex);
-	req = acomp_ctx->req;
-	acomp = acomp_ctx->acomp;
-	buffer = acomp_ctx->buffer;
-	acomp_ctx->req = NULL;
-	acomp_ctx->acomp = NULL;
-	acomp_ctx->buffer = NULL;
-	mutex_unlock(&acomp_ctx->mutex);
-
-	/*
-	 * Do the actual freeing after releasing the mutex to avoid subtle
-	 * locking dependencies causing deadlocks.
-	 */
-	if (!IS_ERR_OR_NULL(req))
-		acomp_request_free(req);
-	if (!IS_ERR_OR_NULL(acomp))
-		crypto_free_acomp(acomp);
-	kfree(buffer);
-
-	return 0;
-}
-
-static struct crypto_acomp_ctx *acomp_ctx_get_cpu_lock(struct zswap_pool *pool)
-{
-	struct crypto_acomp_ctx *acomp_ctx;
-
-	for (;;) {
-		acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
-		mutex_lock(&acomp_ctx->mutex);
-		if (likely(acomp_ctx->req))
-			return acomp_ctx;
-		/*
-		 * It is possible that we were migrated to a different CPU after
-		 * getting the per-CPU ctx but before the mutex was acquired. If
-		 * the old CPU got offlined, zswap_cpu_comp_dead() could have
-		 * already freed ctx->req (among other things) and set it to
-		 * NULL. Just try again on the new CPU that we ended up on.
-		 */
-		mutex_unlock(&acomp_ctx->mutex);
-	}
-}
-
-static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *acomp_ctx)
-{
-	mutex_unlock(&acomp_ctx->mutex);
-}
-
 static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 			   struct zswap_pool *pool)
 {
@@ -864,7 +829,9 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	u8 *dst;
 	bool mapped = false;
 
-	acomp_ctx = acomp_ctx_get_cpu_lock(pool);
+	acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
+	mutex_lock(&acomp_ctx->mutex);
+
 	dst = acomp_ctx->buffer;
 	sg_init_table(&input, 1);
 	sg_set_page(&input, page, PAGE_SIZE, 0);
@@ -927,7 +894,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	else if (alloc_ret)
 		zswap_reject_alloc_fail++;
 
-	acomp_ctx_put_unlock(acomp_ctx);
+	mutex_unlock(&acomp_ctx->mutex);
 	return comp_ret == 0 && alloc_ret == 0;
 }
 
@@ -939,7 +906,8 @@ static bool zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 	int decomp_ret = 0, dlen = PAGE_SIZE;
 	u8 *src, *obj;
 
-	acomp_ctx = acomp_ctx_get_cpu_lock(pool);
+	acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
+	mutex_lock(&acomp_ctx->mutex);
 	obj = zs_obj_read_begin(pool->zs_pool, entry->handle, entry->length,
 				acomp_ctx->buffer);
 
@@ -971,7 +939,7 @@ static bool zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 
 read_done:
 	zs_obj_read_end(pool->zs_pool, entry->handle, entry->length, obj);
-	acomp_ctx_put_unlock(acomp_ctx);
+	mutex_unlock(&acomp_ctx->mutex);
 
 	if (!decomp_ret && dlen == PAGE_SIZE)
 		return true;
@@ -1797,7 +1765,7 @@ static int zswap_setup(void)
 	ret = cpuhp_setup_state_multi(CPUHP_MM_ZSWP_POOL_PREPARE,
 				      "mm/zswap_pool:prepare",
 				      zswap_cpu_comp_prepare,
-				      zswap_cpu_comp_dead);
+				      NULL);
 	if (ret)
 		goto hp_fail;
 
-- 
2.27.0
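
For readers less familiar with the hotplug API the commit message leans
on, below is a minimal, self-contained sketch of the CPU hotplug
multi-state instance pattern it describes: per-CPU resources are
allocated in the startup callback, no teardown callback is registered,
and the resources are freed explicitly when the owning object is
destroyed. The my_pool/my_ctx structures, the my_* names, the
"mm/my_pool:prepare" state name and the use of CPUHP_BP_PREPARE_DYN
are illustrative assumptions, not zswap code; zswap uses its own
CPUHP_MM_ZSWP_POOL_PREPARE state and the zswap_cpu_comp_prepare()
callback shown in the patch.

#include <linux/cpu.h>
#include <linux/cpuhotplug.h>
#include <linux/percpu.h>
#include <linux/slab.h>

struct my_ctx {
	void *buffer;
};

struct my_pool {
	struct my_ctx __percpu *ctx;
	struct hlist_node node;
};

static enum cpuhp_state my_hp_state;

/* Runs for each online CPU when the instance is added, and again for CPUs onlined later. */
static int my_cpu_up(unsigned int cpu, struct hlist_node *node)
{
	struct my_pool *pool = hlist_entry(node, struct my_pool, node);
	struct my_ctx *ctx = per_cpu_ptr(pool->ctx, cpu);

	/* Idempotent: a CPU that went offline and back online keeps its buffer. */
	if (ctx->buffer)
		return 0;

	ctx->buffer = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
	return ctx->buffer ? 0 : -ENOMEM;
}

static struct my_pool *my_pool_create(void)
{
	struct my_pool *pool = kzalloc(sizeof(*pool), GFP_KERNEL);
	int cpu;

	if (!pool)
		return NULL;

	pool->ctx = alloc_percpu_gfp(struct my_ctx, GFP_KERNEL | __GFP_ZERO);
	if (!pool->ctx)
		goto free_pool;

	/* Serialized against hotplug; CPUs cannot go offline while this runs. */
	if (cpuhp_state_add_instance(my_hp_state, &pool->node))
		goto free_ctx;

	return pool;

free_ctx:
	/* No teardown callback, so undo any partial per-CPU allocations here. */
	for_each_possible_cpu(cpu)
		kfree(per_cpu_ptr(pool->ctx, cpu)->buffer);
	free_percpu(pool->ctx);
free_pool:
	kfree(pool);
	return NULL;
}

static void my_pool_destroy(struct my_pool *pool)
{
	int cpu;

	cpuhp_state_remove_instance(my_hp_state, &pool->node);

	/* Per-CPU resources live for the pool's lifetime; free them only here. */
	for_each_possible_cpu(cpu)
		kfree(per_cpu_ptr(pool->ctx, cpu)->buffer);
	free_percpu(pool->ctx);
	kfree(pool);
}

static int __init my_init(void)
{
	int ret = cpuhp_setup_state_multi(CPUHP_BP_PREPARE_DYN,
					  "mm/my_pool:prepare",
					  my_cpu_up, NULL /* no teardown */);
	if (ret < 0)
		return ret;
	my_hp_state = ret;
	return 0;
}

As in the patch, the startup callback is idempotent so a CPU going
through offline-online transitions keeps its existing allocation, and
the explicit for_each_possible_cpu() loop at destruction time takes
the place of a teardown callback.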