From nobody Wed Apr 1 08:17:26 2026
From: "Kanchana P. Sridhar"
To: hannes@cmpxchg.org, yosry@kernel.org, nphamcs@gmail.com,
	chengming.zhou@linux.dev, akpm@linux-foundation.org,
	kanchanapsridhar2026@gmail.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: herbert@gondor.apana.org.au, senozhatsky@chromium.org
Subject: [PATCH v3 1/2] mm: zswap: Remove redundant checks in zswap_cpu_comp_dead().
Date: Tue, 31 Mar 2026 11:33:50 -0700
Message-Id: <20260331183351.29844-2-kanchanapsridhar2026@gmail.com>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20260331183351.29844-1-kanchanapsridhar2026@gmail.com>
References: <20260331183351.29844-1-kanchanapsridhar2026@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

There are presently redundant checks on the per-CPU acomp_ctx and its
"req" member in zswap_cpu_comp_dead(): redundant because they are
inconsistent with how zswap_pool_create() handles a failure to allocate
the acomp_ctx, and with the expected NULL return value from the
acomp_request_alloc() API when it fails to allocate an acomp_req.

Fix these by converting them to NULL checks. Add comments in
zswap_cpu_comp_prepare() clarifying the expected return values of the
crypto_alloc_acomp_node() and acomp_request_alloc() APIs.

Suggested-by: Yosry Ahmed
Signed-off-by: Kanchana P. Sridhar
Acked-by: Yosry Ahmed
---
 mm/zswap.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index 4f2e652e8ad3..c59045b59ffe 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -749,6 +749,10 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 		goto fail;
 	}
 
+	/*
+	 * In case of an error, crypto_alloc_acomp_node() returns an
+	 * error pointer, never NULL.
+	 */
 	acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
 	if (IS_ERR(acomp)) {
 		pr_err("could not alloc crypto acomp %s : %pe\n",
@@ -757,6 +761,7 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 		goto fail;
 	}
 
+	/* acomp_request_alloc() returns NULL in case of an error. */
 	req = acomp_request_alloc(acomp);
 	if (!req) {
 		pr_err("could not alloc crypto acomp_request %s\n",
@@ -802,7 +807,7 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
 	struct crypto_acomp *acomp;
 	u8 *buffer;
 
-	if (IS_ERR_OR_NULL(acomp_ctx))
+	if (!acomp_ctx)
 		return 0;
 
 	mutex_lock(&acomp_ctx->mutex);
@@ -817,8 +822,11 @@ static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
 	/*
 	 * Do the actual freeing after releasing the mutex to avoid subtle
 	 * locking dependencies causing deadlocks.
+	 *
+	 * If there was an error in allocating @acomp_ctx->req, it
+	 * would be set to NULL.
 	 */
-	if (!IS_ERR_OR_NULL(req))
+	if (req)
 		acomp_request_free(req);
 	if (!IS_ERR_OR_NULL(acomp))
 		crypto_free_acomp(acomp);
-- 
2.39.5

From nobody Wed Apr 1 08:17:26 2026
From: "Kanchana P. Sridhar"
To: hannes@cmpxchg.org, yosry@kernel.org, nphamcs@gmail.com,
	chengming.zhou@linux.dev, akpm@linux-foundation.org,
	kanchanapsridhar2026@gmail.com, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Cc: herbert@gondor.apana.org.au, senozhatsky@chromium.org
Subject: [PATCH v3 2/2] mm: zswap: Tie per-CPU acomp_ctx lifetime to the pool.
Date: Tue, 31 Mar 2026 11:33:51 -0700
Message-Id: <20260331183351.29844-3-kanchanapsridhar2026@gmail.com>
X-Mailer: git-send-email 2.39.5
In-Reply-To: <20260331183351.29844-1-kanchanapsridhar2026@gmail.com>
References: <20260331183351.29844-1-kanchanapsridhar2026@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Currently, per-CPU acomp_ctx are allocated on pool creation and/or CPU
hotplug, and destroyed on pool destruction or CPU hotunplug. This
complicates lifetime management just to save memory while a CPU is
offline, which is not a common situation.

Simplify lifetime management by allocating per-CPU acomp_ctx once on
pool creation (or on CPU hotplug for CPUs onlined later), and keeping
them allocated until the pool is destroyed. Refactor cleanup code from
zswap_cpu_comp_dead() into acomp_ctx_free() to be used elsewhere.
The main benefit of using the CPU hotplug multi-state instance startup
callback to allocate the acomp_ctx resources is that it prevents the
cores from being offlined until the multi-state instance addition call
returns. From Documentation/core-api/cpu_hotplug.rst:

  "The node list add/remove operations and the callback invocations are
   serialized against CPU hotplug operations."

Furthermore, zswap_[de]compress() cannot contend with
zswap_cpu_comp_prepare() because:

- During pool creation/deletion, the pool is not in the zswap_pools
  list.

- During CPU hot[un]plug, the CPU is not yet online, as Yosry pointed
  out. zswap_cpu_comp_prepare() will be run on a control CPU, since
  CPUHP_MM_ZSWP_POOL_PREPARE is in the PREPARE section of
  "enum cpuhp_state".

In both these cases, any recursion into zswap reclaim from
zswap_cpu_comp_prepare() will be handled by the old pool.

The above two observations enable the following simplifications:

1) zswap_cpu_comp_prepare():

   a) acomp_ctx mutex locking: If the process gets migrated while
      zswap_cpu_comp_prepare() is running, it will complete on the new
      CPU. In case of failures, we pass the acomp_ctx pointer obtained
      at the start of zswap_cpu_comp_prepare() to acomp_ctx_free(),
      which, again, can only undergo migration. There appear to be no
      contention scenarios that might cause inconsistent values of the
      acomp_ctx's members. Hence, there is no need for
      mutex_lock(&acomp_ctx->mutex) in zswap_cpu_comp_prepare().

   b) acomp_ctx mutex initialization: Since the pool is not yet on the
      zswap_pools list, we don't need to initialize the per-CPU
      acomp_ctx mutex in zswap_pool_create(). This has been restored to
      occur in zswap_cpu_comp_prepare().

   c) Subsequent CPU offline-online transitions:
      zswap_cpu_comp_prepare() checks upfront whether acomp_ctx->acomp
      is valid. If so, it returns success. This handles any CPU
      online-offline-online transitions after pool creation is done.
2) CPU offline vis-a-vis zswap ops: Suppose the process is migrated to
   another CPU before the current CPU goes offline. If
   zswap_[de]compress() holds the acomp_ctx->mutex lock of the offlined
   CPU, that mutex will be released once it completes on the new CPU.
   Since there is no teardown callback, there is no possibility of a
   use-after-free.

3) Pool creation/deletion and process migration to another CPU:

   During pool creation/deletion, the pool is not in the zswap_pools
   list. Hence it cannot contend with zswap ops on that CPU. However,
   the process can get migrated.

   a) Pool creation --> zswap_cpu_comp_prepare() --> process migrated:
      * Old CPU offline: no-op.
      * zswap_cpu_comp_prepare() continues to run on the new CPU to
        finish allocating acomp_ctx resources for the offlined CPU.

   b) Pool deletion --> acomp_ctx_free() --> process migrated:
      * Old CPU offline: no-op.
      * acomp_ctx_free() continues to run on the new CPU to finish
        de-allocating acomp_ctx resources for the offlined CPU.

4) Pool deletion vis-a-vis CPU onlining: The call to
   cpuhp_state_remove_instance() cannot race with
   zswap_cpu_comp_prepare() because of hotplug synchronization.

The current acomp_ctx_get_cpu_lock()/acomp_ctx_put_unlock() are
deleted. Instead, zswap_[de]compress() directly call
mutex_[un]lock(&acomp_ctx->mutex).

The per-CPU memory cost of not freeing the acomp_ctx resources upon CPU
offlining, and only freeing them when the pool is destroyed, is 8.28 KB
on x86_64. This cost is incurred only while a CPU remains offline,
until it is onlined again.

Co-developed-by: Kanchana P. Sridhar
Signed-off-by: Kanchana P. Sridhar
Signed-off-by: Kanchana P Sridhar
Acked-by: Yosry Ahmed
---
 mm/zswap.c | 180 ++++++++++++++++++++++++-----------------------
 1 file changed, 80 insertions(+), 100 deletions(-)

diff --git a/mm/zswap.c b/mm/zswap.c
index c59045b59ffe..4b5149173b0e 100644
--- a/mm/zswap.c
+++ b/mm/zswap.c
@@ -242,6 +242,34 @@ static inline struct xarray *swap_zswap_tree(swp_entry_t swp)
 **********************************/
 static void __zswap_pool_empty(struct percpu_ref *ref);
 
+static void acomp_ctx_free(struct crypto_acomp_ctx *acomp_ctx)
+{
+	if (!acomp_ctx)
+		return;
+
+	/*
+	 * If there was an error in allocating @acomp_ctx->req, it
+	 * would be set to NULL.
+	 */
+	if (acomp_ctx->req)
+		acomp_request_free(acomp_ctx->req);
+
+	acomp_ctx->req = NULL;
+
+	/*
+	 * We have to handle both cases here: an error pointer return from
+	 * crypto_alloc_acomp_node(); and a) NULL initialization by zswap, or
+	 * b) NULL assignment done in a previous call to acomp_ctx_free().
+	 */
+	if (!IS_ERR_OR_NULL(acomp_ctx->acomp))
+		crypto_free_acomp(acomp_ctx->acomp);
+
+	acomp_ctx->acomp = NULL;
+
+	kfree(acomp_ctx->buffer);
+	acomp_ctx->buffer = NULL;
+}
+
 static struct zswap_pool *zswap_pool_create(char *compressor)
 {
 	struct zswap_pool *pool;
@@ -263,19 +291,27 @@ static struct zswap_pool *zswap_pool_create(char *compressor)
 
 	strscpy(pool->tfm_name, compressor, sizeof(pool->tfm_name));
 
-	pool->acomp_ctx = alloc_percpu(*pool->acomp_ctx);
+	/* Many things rely on the zero-initialization. */
+	pool->acomp_ctx = alloc_percpu_gfp(*pool->acomp_ctx,
+					   GFP_KERNEL | __GFP_ZERO);
 	if (!pool->acomp_ctx) {
 		pr_err("percpu alloc failed\n");
 		goto error;
 	}
 
-	for_each_possible_cpu(cpu)
-		mutex_init(&per_cpu_ptr(pool->acomp_ctx, cpu)->mutex);
-
+	/*
+	 * This is serialized against CPU hotplug operations. Hence, cores
+	 * cannot be offlined until this finishes.
+	 */
 	ret = cpuhp_state_add_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
+
+	/*
+	 * cpuhp_state_add_instance() will not cleanup on failure since
+	 * we don't register a hotunplug callback.
+	 */
 	if (ret)
-		goto error;
+		goto cpuhp_add_fail;
 
 	/* being the current pool takes 1 ref; this func expects the
 	 * caller to always add the new pool as the current pool
@@ -292,6 +328,10 @@ static struct zswap_pool *zswap_pool_create(char *compressor)
 
 ref_fail:
 	cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
+
+cpuhp_add_fail:
+	for_each_possible_cpu(cpu)
+		acomp_ctx_free(per_cpu_ptr(pool->acomp_ctx, cpu));
 error:
 	if (pool->acomp_ctx)
 		free_percpu(pool->acomp_ctx);
@@ -322,9 +362,15 @@ static struct zswap_pool *__zswap_pool_create_fallback(void)
 
 static void zswap_pool_destroy(struct zswap_pool *pool)
 {
+	int cpu;
+
 	zswap_pool_debug("destroying", pool);
 
 	cpuhp_state_remove_instance(CPUHP_MM_ZSWP_POOL_PREPARE, &pool->node);
+
+	for_each_possible_cpu(cpu)
+		acomp_ctx_free(per_cpu_ptr(pool->acomp_ctx, cpu));
+
 	free_percpu(pool->acomp_ctx);
 
 	zs_destroy_pool(pool->zs_pool);
@@ -738,44 +784,41 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 {
 	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
 	struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
-	struct crypto_acomp *acomp = NULL;
-	struct acomp_req *req = NULL;
-	u8 *buffer = NULL;
-	int ret;
+	int ret = -ENOMEM;
 
-	buffer = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
-	if (!buffer) {
-		ret = -ENOMEM;
-		goto fail;
+	/*
+	 * To handle cases where the CPU goes through online-offline-online
+	 * transitions, we return if the acomp_ctx has already been initialized.
+	 */
+	if (acomp_ctx->acomp) {
+		WARN_ON_ONCE(IS_ERR(acomp_ctx->acomp));
+		return 0;
 	}
 
+	acomp_ctx->buffer = kmalloc_node(PAGE_SIZE, GFP_KERNEL, cpu_to_node(cpu));
+	if (!acomp_ctx->buffer)
+		return ret;
+
 	/*
 	 * In case of an error, crypto_alloc_acomp_node() returns an
 	 * error pointer, never NULL.
 	 */
-	acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
-	if (IS_ERR(acomp)) {
+	acomp_ctx->acomp = crypto_alloc_acomp_node(pool->tfm_name, 0, 0, cpu_to_node(cpu));
+	if (IS_ERR(acomp_ctx->acomp)) {
 		pr_err("could not alloc crypto acomp %s : %pe\n",
-		       pool->tfm_name, acomp);
-		ret = PTR_ERR(acomp);
+		       pool->tfm_name, acomp_ctx->acomp);
+		ret = PTR_ERR(acomp_ctx->acomp);
 		goto fail;
 	}
 
 	/* acomp_request_alloc() returns NULL in case of an error. */
-	req = acomp_request_alloc(acomp);
-	if (!req) {
+	acomp_ctx->req = acomp_request_alloc(acomp_ctx->acomp);
+	if (!acomp_ctx->req) {
 		pr_err("could not alloc crypto acomp_request %s\n",
 		       pool->tfm_name);
-		ret = -ENOMEM;
 		goto fail;
 	}
 
-	/*
-	 * Only hold the mutex after completing allocations, otherwise we may
-	 * recurse into zswap through reclaim and attempt to hold the mutex
-	 * again resulting in a deadlock.
-	 */
-	mutex_lock(&acomp_ctx->mutex);
 	crypto_init_wait(&acomp_ctx->wait);
 
 	/*
@@ -783,83 +826,17 @@ static int zswap_cpu_comp_prepare(unsigned int cpu, struct hlist_node *node)
 	 * crypto_wait_req(); if the backend of acomp is scomp, the callback
 	 * won't be called, crypto_wait_req() will return without blocking.
 	 */
-	acomp_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
+	acomp_request_set_callback(acomp_ctx->req, CRYPTO_TFM_REQ_MAY_BACKLOG,
 				   crypto_req_done, &acomp_ctx->wait);
 
-	acomp_ctx->buffer = buffer;
-	acomp_ctx->acomp = acomp;
-	acomp_ctx->req = req;
-	mutex_unlock(&acomp_ctx->mutex);
+	mutex_init(&acomp_ctx->mutex);
 	return 0;
 
 fail:
-	if (!IS_ERR_OR_NULL(acomp))
-		crypto_free_acomp(acomp);
-	kfree(buffer);
+	acomp_ctx_free(acomp_ctx);
 	return ret;
 }
 
-static int zswap_cpu_comp_dead(unsigned int cpu, struct hlist_node *node)
-{
-	struct zswap_pool *pool = hlist_entry(node, struct zswap_pool, node);
-	struct crypto_acomp_ctx *acomp_ctx = per_cpu_ptr(pool->acomp_ctx, cpu);
-	struct acomp_req *req;
-	struct crypto_acomp *acomp;
-	u8 *buffer;
-
-	if (!acomp_ctx)
-		return 0;
-
-	mutex_lock(&acomp_ctx->mutex);
-	req = acomp_ctx->req;
-	acomp = acomp_ctx->acomp;
-	buffer = acomp_ctx->buffer;
-	acomp_ctx->req = NULL;
-	acomp_ctx->acomp = NULL;
-	acomp_ctx->buffer = NULL;
-	mutex_unlock(&acomp_ctx->mutex);
-
-	/*
-	 * Do the actual freeing after releasing the mutex to avoid subtle
-	 * locking dependencies causing deadlocks.
-	 *
-	 * If there was an error in allocating @acomp_ctx->req, it
-	 * would be set to NULL.
-	 */
-	if (req)
-		acomp_request_free(req);
-	if (!IS_ERR_OR_NULL(acomp))
-		crypto_free_acomp(acomp);
-	kfree(buffer);
-
-	return 0;
-}
-
-static struct crypto_acomp_ctx *acomp_ctx_get_cpu_lock(struct zswap_pool *pool)
-{
-	struct crypto_acomp_ctx *acomp_ctx;
-
-	for (;;) {
-		acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
-		mutex_lock(&acomp_ctx->mutex);
-		if (likely(acomp_ctx->req))
-			return acomp_ctx;
-		/*
-		 * It is possible that we were migrated to a different CPU after
-		 * getting the per-CPU ctx but before the mutex was acquired. If
-		 * the old CPU got offlined, zswap_cpu_comp_dead() could have
-		 * already freed ctx->req (among other things) and set it to
-		 * NULL. Just try again on the new CPU that we ended up on.
-		 */
-		mutex_unlock(&acomp_ctx->mutex);
-	}
-}
-
-static void acomp_ctx_put_unlock(struct crypto_acomp_ctx *acomp_ctx)
-{
-	mutex_unlock(&acomp_ctx->mutex);
-}
-
 static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 			   struct zswap_pool *pool)
 {
@@ -872,7 +849,9 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	u8 *dst;
 	bool mapped = false;
 
-	acomp_ctx = acomp_ctx_get_cpu_lock(pool);
+	acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
+	mutex_lock(&acomp_ctx->mutex);
+
 	dst = acomp_ctx->buffer;
 	sg_init_table(&input, 1);
 	sg_set_page(&input, page, PAGE_SIZE, 0);
@@ -938,7 +917,7 @@ static bool zswap_compress(struct page *page, struct zswap_entry *entry,
 	else if (alloc_ret)
 		zswap_reject_alloc_fail++;
 
-	acomp_ctx_put_unlock(acomp_ctx);
+	mutex_unlock(&acomp_ctx->mutex);
 	return comp_ret == 0 && alloc_ret == 0;
 }
 
@@ -950,7 +929,8 @@ static bool zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 	struct crypto_acomp_ctx *acomp_ctx;
 	int ret = 0, dlen;
 
-	acomp_ctx = acomp_ctx_get_cpu_lock(pool);
+	acomp_ctx = raw_cpu_ptr(pool->acomp_ctx);
+	mutex_lock(&acomp_ctx->mutex);
 	zs_obj_read_sg_begin(pool->zs_pool, entry->handle, input, entry->length);
 
 	/* zswap entries of length PAGE_SIZE are not compressed. */
@@ -975,7 +955,7 @@ static bool zswap_decompress(struct zswap_entry *entry, struct folio *folio)
 	}
 
 	zs_obj_read_sg_end(pool->zs_pool, entry->handle);
-	acomp_ctx_put_unlock(acomp_ctx);
+	mutex_unlock(&acomp_ctx->mutex);
 
 	if (!ret && dlen == PAGE_SIZE)
 		return true;
@@ -1795,7 +1775,7 @@ static int zswap_setup(void)
 	ret = cpuhp_setup_state_multi(CPUHP_MM_ZSWP_POOL_PREPARE,
 				      "mm/zswap_pool:prepare",
 				      zswap_cpu_comp_prepare,
-				      NULL);
 	if (ret)
 		goto hp_fail;
 
-- 
2.39.5