From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Herbert Xu,
    Jakub Kicinski, "David S. Miller"
Cc: Dmitry Safonov, Andy Lutomirski, Bob Gilligan,
    Dmitry Safonov <0x7f454c46@gmail.com>, Hideaki YOSHIFUJI, Leonard Crestez,
    Paolo Abeni, Salam Noureddine, netdev@vger.kernel.org,
    linux-crypto@vger.kernel.org
Subject: [PATCH v2 1/5] crypto: Introduce crypto_pool
Date: Tue, 3 Jan 2023 18:42:53 +0000
Message-Id: <20230103184257.118069-2-dima@arista.com>
In-Reply-To: <20230103184257.118069-1-dima@arista.com>
References: <20230103184257.118069-1-dima@arista.com>

Introduce a per-CPU pool of async crypto requests that can be used in
bh-disabled contexts (designed with net RX/TX softirqs as users in mind).
Allocation can sleep and is a slow path. The initial implementation has
only ahash as a backend and a fixed-size array of algorithms that can be
used in parallel.

Signed-off-by: Dmitry Safonov
---
 crypto/Kconfig        |   6 +
 crypto/Makefile       |   1 +
 crypto/crypto_pool.c  | 291 ++++++++++++++++++++++++++++++++++++++++++
 include/crypto/pool.h |  34 +++++
 4 files changed, 332 insertions(+)
 create mode 100644 crypto/crypto_pool.c
 create mode 100644 include/crypto/pool.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 9c86f7045157..ba8d4a1f10f9 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1388,6 +1388,12 @@ endmenu
 config CRYPTO_HASH_INFO
         bool

+config CRYPTO_POOL
+        tristate "Per-CPU crypto pool"
+        default n
+        help
+          Per-CPU pool of crypto requests ready for usage in atomic contexts.
+
 if !KMSAN # avoid false positives from assembly
 if ARM
 source "arch/arm/crypto/Kconfig"
diff --git a/crypto/Makefile b/crypto/Makefile
index d0126c915834..eed8f61bc93b 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -63,6 +63,7 @@ obj-$(CONFIG_CRYPTO_ACOMP2) += crypto_acompress.o
 cryptomgr-y := algboss.o testmgr.o

 obj-$(CONFIG_CRYPTO_MANAGER2) += cryptomgr.o
+obj-$(CONFIG_CRYPTO_POOL) += crypto_pool.o
 obj-$(CONFIG_CRYPTO_USER) += crypto_user.o
 crypto_user-y := crypto_user_base.o
 crypto_user-$(CONFIG_CRYPTO_STATS) += crypto_user_stat.o
diff --git a/crypto/crypto_pool.c b/crypto/crypto_pool.c
new file mode 100644
index 000000000000..37131952c5a7
--- /dev/null
+++ b/crypto/crypto_pool.c
@@ -0,0 +1,291 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+
+#include <crypto/pool.h>
+#include <linux/kref.h>
+#include <linux/module.h>
+#include <linux/mutex.h>
+#include <linux/percpu.h>
+#include <linux/workqueue.h>
+
+static unsigned long scratch_size = DEFAULT_CRYPTO_POOL_SCRATCH_SZ;
+static DEFINE_PER_CPU(void *, crypto_pool_scratch);
+
+struct crypto_pool_entry {
+        struct ahash_request * __percpu *req;
+        const char *alg;
+        struct kref kref;
+        bool needs_key;
+};
+
+#define CPOOL_SIZE (PAGE_SIZE/sizeof(struct crypto_pool_entry))
+static struct crypto_pool_entry cpool[CPOOL_SIZE];
+static unsigned int cpool_populated;
+static DEFINE_MUTEX(cpool_mutex);
+
+static int crypto_pool_scratch_alloc(void)
+{
+        int cpu;
+
+        lockdep_assert_held(&cpool_mutex);
+
+        for_each_possible_cpu(cpu) {
+                void *scratch = per_cpu(crypto_pool_scratch, cpu);
+
+                if (scratch)
+                        continue;
+
+                scratch = kmalloc_node(scratch_size, GFP_KERNEL,
+                                       cpu_to_node(cpu));
+                if (!scratch)
+                        return -ENOMEM;
+                per_cpu(crypto_pool_scratch, cpu) = scratch;
+        }
+        return 0;
+}
+
+static void crypto_pool_scratch_free(void)
+{
+        int cpu;
+
+        lockdep_assert_held(&cpool_mutex);
+
+        for_each_possible_cpu(cpu) {
+                void *scratch = per_cpu(crypto_pool_scratch, cpu);
+
+                if (!scratch)
+                        continue;
+                per_cpu(crypto_pool_scratch, cpu) = NULL;
+                kfree(scratch);
+        }
+}
+
+static int __cpool_alloc_ahash(struct crypto_pool_entry *e, const char *alg)
+{
+        struct crypto_ahash *hash;
+        int cpu, ret = -ENOMEM;
+
+        e->alg = kstrdup(alg, GFP_KERNEL);
+        if (!e->alg)
+                return -ENOMEM;
+
+        e->req = alloc_percpu(struct ahash_request *);
+        if (!e->req)
+                goto out_free_alg;
+
+        hash = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC);
+        if (IS_ERR(hash)) {
+                ret = PTR_ERR(hash);
+                goto out_free_req;
+        }
+
+        /* If hash has .setkey(), allocate ahash per-cpu, not only request */
+        e->needs_key = crypto_ahash_get_flags(hash) & CRYPTO_TFM_NEED_KEY;
+
+        for_each_possible_cpu(cpu) {
+                struct ahash_request *req;
+
+                if (!hash)
+                        hash = crypto_alloc_ahash(alg, 0, CRYPTO_ALG_ASYNC);
+                if (IS_ERR(hash))
+                        goto out_free;
+
+                req = ahash_request_alloc(hash, GFP_KERNEL);
+                if (!req)
+                        goto out_free;
+
+                ahash_request_set_callback(req, 0, NULL, NULL);
+
+                *per_cpu_ptr(e->req, cpu) = req;
+
+                if (e->needs_key)
+                        hash = NULL;
+        }
+        kref_init(&e->kref);
+        return 0;
+
+out_free:
+        if (!IS_ERR_OR_NULL(hash) && e->needs_key)
+                crypto_free_ahash(hash);
+
+        for_each_possible_cpu(cpu) {
+                if (*per_cpu_ptr(e->req, cpu) == NULL)
+                        break;
+                hash = crypto_ahash_reqtfm(*per_cpu_ptr(e->req, cpu));
+                ahash_request_free(*per_cpu_ptr(e->req, cpu));
+                if (e->needs_key) {
+                        crypto_free_ahash(hash);
+                        hash = NULL;
+                }
+        }
+
+        if (hash)
+                crypto_free_ahash(hash);
+out_free_req:
+        free_percpu(e->req);
+out_free_alg:
+        kfree(e->alg);
+        e->alg = NULL;
+        return ret;
+}
+
+/**
+ * crypto_pool_alloc_ahash - allocates pool for ahash requests
+ * @alg: name of async hash algorithm
+ */
+int crypto_pool_alloc_ahash(const char *alg)
+{
+        int i, ret;
+
+        /* slow-path */
+        mutex_lock(&cpool_mutex);
+
+        for (i = 0; i < cpool_populated; i++) {
+                if (cpool[i].alg && !strcmp(cpool[i].alg, alg)) {
+                        if (kref_read(&cpool[i].kref) > 0) {
+                                kref_get(&cpool[i].kref);
+                                ret = i;
+                                goto out;
+                        } else {
+                                break;
+                        }
+                }
+        }
+
+        for (i = 0; i < cpool_populated; i++) {
+                if (!cpool[i].alg)
+                        break;
+        }
+        if (i >= CPOOL_SIZE) {
+                ret = -ENOSPC;
+                goto out;
+        }
+
+        ret = __cpool_alloc_ahash(&cpool[i], alg);
+        if (!ret) {
+                ret = i;
+                if (i == cpool_populated)
+                        cpool_populated++;
+        }
+out:
+        mutex_unlock(&cpool_mutex);
+        return ret;
+}
+EXPORT_SYMBOL_GPL(crypto_pool_alloc_ahash);
+
+static void __cpool_free_entry(struct crypto_pool_entry *e)
+{
+        struct crypto_ahash *hash = NULL;
+        int cpu;
+
+        for_each_possible_cpu(cpu) {
+                if (*per_cpu_ptr(e->req, cpu) == NULL)
+                        continue;
+
+                hash = crypto_ahash_reqtfm(*per_cpu_ptr(e->req, cpu));
+                ahash_request_free(*per_cpu_ptr(e->req, cpu));
+                if (e->needs_key) {
+                        crypto_free_ahash(hash);
+                        hash = NULL;
+                }
+        }
+        if (hash)
+                crypto_free_ahash(hash);
+        free_percpu(e->req);
+        kfree(e->alg);
+        memset(e, 0, sizeof(*e));
+}
+
+static void cpool_cleanup_work_cb(struct work_struct *work)
+{
+        unsigned int i;
+        bool free_scratch = true;
+
+        mutex_lock(&cpool_mutex);
+        for (i = 0; i < cpool_populated; i++) {
+                if (kref_read(&cpool[i].kref) > 0) {
+                        free_scratch = false;
+                        continue;
+                }
+                if (!cpool[i].alg)
+                        continue;
+                __cpool_free_entry(&cpool[i]);
+        }
+        if (free_scratch)
+                crypto_pool_scratch_free();
+        mutex_unlock(&cpool_mutex);
+}
+
+static DECLARE_WORK(cpool_cleanup_work, cpool_cleanup_work_cb);
+static void cpool_schedule_cleanup(struct kref *kref)
+{
+        schedule_work(&cpool_cleanup_work);
+}
+
+/**
+ * crypto_pool_release - decreases number of users for a pool. If it was
+ * the last user of the pool, releases any memory that was consumed.
+ * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash()
+ */
+void crypto_pool_release(unsigned int id)
+{
+        if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg))
+                return;
+
+        /* slow-path */
+        kref_put(&cpool[id].kref, cpool_schedule_cleanup);
+}
+EXPORT_SYMBOL_GPL(crypto_pool_release);
+
+/**
+ * crypto_pool_add - increases number of users (refcounter) for a pool
+ * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash()
+ */
+void crypto_pool_add(unsigned int id)
+{
+        if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg))
+                return;
+        kref_get(&cpool[id].kref);
+}
+EXPORT_SYMBOL_GPL(crypto_pool_add);
+
+/**
+ * crypto_pool_get - disable bh and start using crypto_pool
+ * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash()
+ * @c: returned crypto_pool for usage (uninitialized on failure)
+ */
+int crypto_pool_get(unsigned int id, struct crypto_pool *c)
+{
+        struct crypto_pool_ahash *ret = (struct crypto_pool_ahash *)c;
+
+        local_bh_disable();
+        if (WARN_ON_ONCE(id > cpool_populated || !cpool[id].alg)) {
+                local_bh_enable();
+                return -EINVAL;
+        }
+        ret->req = *this_cpu_ptr(cpool[id].req);
+        ret->base.scratch = this_cpu_read(crypto_pool_scratch);
+        return 0;
+}
+EXPORT_SYMBOL_GPL(crypto_pool_get);
+
+/**
+ * crypto_pool_algo - return algorithm of crypto_pool
+ * @id: crypto_pool that was previously allocated by crypto_pool_alloc_ahash()
+ * @buf: buffer to return name of algorithm
+ * @buf_len: size of @buf
+ */
+size_t crypto_pool_algo(unsigned int id, char *buf, size_t buf_len)
+{
+        size_t ret = 0;
+
+        /* slow-path */
+        mutex_lock(&cpool_mutex);
+        if (cpool[id].alg)
+                ret = strscpy(buf, cpool[id].alg, buf_len);
+        mutex_unlock(&cpool_mutex);
+        return ret;
+}
+EXPORT_SYMBOL_GPL(crypto_pool_algo);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Per-CPU pool of crypto requests");
diff --git a/include/crypto/pool.h b/include/crypto/pool.h
new file mode 100644
index 000000000000..2c61aa45faff
--- /dev/null
+++ b/include/crypto/pool.h
@@ -0,0 +1,34 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+#ifndef _CRYPTO_POOL_H
+#define _CRYPTO_POOL_H
+
+#include <crypto/ahash.h>
+
+#define DEFAULT_CRYPTO_POOL_SCRATCH_SZ 128
+
+struct crypto_pool {
+        void *scratch;
+};
+
+/*
+ * struct crypto_pool_ahash - per-CPU pool of ahash_requests
+ * @base: common members that can be used by any async crypto ops
+ * @req: pre-allocated ahash request
+ */
+struct crypto_pool_ahash {
+        struct crypto_pool base;
+        struct ahash_request *req;
+};
+
+int crypto_pool_alloc_ahash(const char *alg);
+void crypto_pool_add(unsigned int id);
+void crypto_pool_release(unsigned int id);
+
+int crypto_pool_get(unsigned int id, struct crypto_pool *c);
+static inline void crypto_pool_put(void)
+{
+        local_bh_enable();
+}
+size_t crypto_pool_algo(unsigned int id, char *buf, size_t buf_len);
+
+#endif /* _CRYPTO_POOL_H */
--
2.39.0
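A minimal usage sketch of the API introduced above (not part of the patch;
the example_* names are hypothetical and error handling is trimmed):

#include <crypto/pool.h>
#include <linux/scatterlist.h>

static int example_pool_id;

/* Slow path, may sleep: allocate the pool once, e.g. at setup time. */
static int example_setup(void)
{
        int id = crypto_pool_alloc_ahash("md5");

        if (id < 0)
                return id;
        example_pool_id = id;
        return 0;
}

/* Fast path: usable from bh-disabled contexts such as net RX/TX. */
static int example_hash(const void *data, unsigned int len, u8 *out)
{
        struct crypto_pool_ahash hp;
        struct scatterlist sg;
        int err;

        /* Disables bh; the per-CPU request is ours until crypto_pool_put() */
        err = crypto_pool_get(example_pool_id, (struct crypto_pool *)&hp);
        if (err)
                return err;

        sg_init_one(&sg, data, len);
        ahash_request_set_crypt(hp.req, &sg, out, len);
        err = crypto_ahash_digest(hp.req);

        crypto_pool_put();      /* re-enables bh */
        return err;
}

/* Slow path: drop the reference taken by crypto_pool_alloc_ahash(). */
static void example_teardown(void)
{
        crypto_pool_release(example_pool_id);
}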
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Herbert Xu,
    Jakub Kicinski, "David S. Miller"
Cc: Dmitry Safonov, Andy Lutomirski, Bob Gilligan,
    Dmitry Safonov <0x7f454c46@gmail.com>, Hideaki YOSHIFUJI, Leonard Crestez,
    Paolo Abeni, Salam Noureddine, netdev@vger.kernel.org,
    linux-crypto@vger.kernel.org
Subject: [PATCH v2 2/5] crypto/pool: Add crypto_pool_reserve_scratch()
Date: Tue, 3 Jan 2023 18:42:54 +0000
Message-Id: <20230103184257.118069-3-dima@arista.com>
In-Reply-To: <20230103184257.118069-1-dima@arista.com>
References: <20230103184257.118069-1-dima@arista.com>

Instead of a build-time hardcoded constant, reallocate the scratch area
when a user needs a bigger one. Different algorithms and different users
may need temporary per-CPU buffers of different sizes. Only up-sizing is
supported, for simplicity.
Signed-off-by: Dmitry Safonov
---
 crypto/Kconfig        |  6 ++++
 crypto/crypto_pool.c  | 77 ++++++++++++++++++++++++++++++++++---------
 include/crypto/pool.h |  3 +-
 3 files changed, 69 insertions(+), 17 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig
index ba8d4a1f10f9..0614c2acfffa 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -1394,6 +1394,12 @@ config CRYPTO_POOL
         help
           Per-CPU pool of crypto requests ready for usage in atomic contexts.

+config CRYPTO_POOL_DEFAULT_SCRATCH_SIZE
+        hex "Per-CPU default scratch area size"
+        depends on CRYPTO_POOL
+        default 0x100
+        range 0x100 0x10000
+
 if !KMSAN # avoid false positives from assembly
 if ARM
 source "arch/arm/crypto/Kconfig"
diff --git a/crypto/crypto_pool.c b/crypto/crypto_pool.c
index 37131952c5a7..0cd9eade7b73 100644
--- a/crypto/crypto_pool.c
+++ b/crypto/crypto_pool.c
@@ -1,13 +1,14 @@
 // SPDX-License-Identifier: GPL-2.0-or-later

 #include <crypto/pool.h>
+#include <linux/cpu.h>
 #include <linux/kref.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/percpu.h>
 #include <linux/workqueue.h>

-static unsigned long scratch_size = DEFAULT_CRYPTO_POOL_SCRATCH_SZ;
+static unsigned long scratch_size = CONFIG_CRYPTO_POOL_DEFAULT_SCRATCH_SIZE;
 static DEFINE_PER_CPU(void *, crypto_pool_scratch);

 struct crypto_pool_entry {
@@ -22,26 +23,69 @@ static struct crypto_pool_entry cpool[CPOOL_SIZE];
 static unsigned int cpool_populated;
 static DEFINE_MUTEX(cpool_mutex);

-static int crypto_pool_scratch_alloc(void)
+/* Slow-path */
+/**
+ * crypto_pool_reserve_scratch - re-allocates scratch buffer, slow-path
+ * @size: request size for the scratch/temp buffer
+ */
+int crypto_pool_reserve_scratch(unsigned long size)
 {
-        int cpu;
-
-        lockdep_assert_held(&cpool_mutex);
+#define FREE_BATCH_SIZE         64
+        void *free_batch[FREE_BATCH_SIZE];
+        int cpu, err = 0;
+        unsigned int i = 0;

+        mutex_lock(&cpool_mutex);
+        if (size == scratch_size) {
+                for_each_possible_cpu(cpu) {
+                        if (per_cpu(crypto_pool_scratch, cpu))
+                                continue;
+                        goto allocate_scratch;
+                }
+                mutex_unlock(&cpool_mutex);
+                return 0;
+        }
+allocate_scratch:
+        size = max(size, scratch_size);
+        cpus_read_lock();
         for_each_possible_cpu(cpu) {
-                void *scratch = per_cpu(crypto_pool_scratch, cpu);
+                void *scratch, *old_scratch;

-                if (scratch)
+                scratch = kmalloc_node(size, GFP_KERNEL, cpu_to_node(cpu));
+                if (!scratch) {
+                        err = -ENOMEM;
+                        break;
+                }
+
+                old_scratch = per_cpu(crypto_pool_scratch, cpu);
+                /* Pairs with crypto_pool_get() */
+                WRITE_ONCE(*per_cpu_ptr(&crypto_pool_scratch, cpu), scratch);
+                if (!cpu_online(cpu)) {
+                        kfree(old_scratch);
                         continue;
+                }
+                free_batch[i++] = old_scratch;
+                if (i == FREE_BATCH_SIZE) {
+                        cpus_read_unlock();
+                        synchronize_rcu();
+                        while (i > 0)
+                                kfree(free_batch[--i]);
+                        cpus_read_lock();
+                }
+        }
+        cpus_read_unlock();
+        if (!err)
+                scratch_size = size;
+        mutex_unlock(&cpool_mutex);

-                scratch = kmalloc_node(scratch_size, GFP_KERNEL,
-                                       cpu_to_node(cpu));
-                if (!scratch)
-                        return -ENOMEM;
-                per_cpu(crypto_pool_scratch, cpu) = scratch;
+        if (i > 0) {
+                synchronize_rcu();
+                while (i > 0)
+                        kfree(free_batch[--i]);
         }
-        return 0;
+        return err;
 }
+EXPORT_SYMBOL_GPL(crypto_pool_reserve_scratch);

 static void crypto_pool_scratch_free(void)
 {
@@ -138,7 +182,6 @@ int crypto_pool_alloc_ahash(const char *alg)

         /* slow-path */
         mutex_lock(&cpool_mutex);
-
         for (i = 0; i < cpool_populated; i++) {
                 if (cpool[i].alg && !strcmp(cpool[i].alg, alg)) {
                         if (kref_read(&cpool[i].kref) > 0) {
@@ -263,7 +306,11 @@ int crypto_pool_get(unsigned int id, struct crypto_pool *c)
                 return -EINVAL;
         }
         ret->req = *this_cpu_ptr(cpool[id].req);
-        ret->base.scratch = this_cpu_read(crypto_pool_scratch);
+        /*
+         * Pairs with crypto_pool_reserve_scratch(); the scratch area is
+         * valid (allocated) until crypto_pool_put().
+         */
+        ret->base.scratch = READ_ONCE(*this_cpu_ptr(&crypto_pool_scratch));
         return 0;
 }
 EXPORT_SYMBOL_GPL(crypto_pool_get);
diff --git a/include/crypto/pool.h b/include/crypto/pool.h
index 2c61aa45faff..c7d817860cc3 100644
--- a/include/crypto/pool.h
+++ b/include/crypto/pool.h
@@ -4,8 +4,6 @@

 #include <crypto/ahash.h>

-#define DEFAULT_CRYPTO_POOL_SCRATCH_SZ 128
-
 struct crypto_pool {
         void *scratch;
 };
@@ -20,6 +18,7 @@ struct crypto_pool_ahash {
         struct ahash_request *req;
 };

+int crypto_pool_reserve_scratch(unsigned long size);
 int crypto_pool_alloc_ahash(const char *alg);
 void crypto_pool_add(unsigned int id);
 void crypto_pool_release(unsigned int id);
--
2.39.0
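The scratch buffer is shared by all pool users, so each user asks for the
size it needs and the largest request wins. A sketch (hypothetical caller,
not from the patch; SHA256_DIGEST_SIZE comes from <crypto/sha2.h>):

#include <crypto/pool.h>
#include <crypto/sha2.h>

static int example_init(void)
{
        int err, id;

        /* Only grows the per-CPU scratch areas; smaller requests are no-ops */
        err = crypto_pool_reserve_scratch(SHA256_DIGEST_SIZE);
        if (err)
                return err;

        id = crypto_pool_alloc_ahash("hmac(sha256)");
        return id < 0 ? id : 0;
}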
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Herbert Xu,
    Jakub Kicinski, "David S. Miller"
Cc: Dmitry Safonov, Andy Lutomirski, Bob Gilligan,
    Dmitry Safonov <0x7f454c46@gmail.com>, Hideaki YOSHIFUJI, Leonard Crestez,
    Paolo Abeni, Salam Noureddine, netdev@vger.kernel.org,
    linux-crypto@vger.kernel.org
Subject: [PATCH v2 3/5] crypto/net/tcp: Use crypto_pool for TCP-MD5
Date: Tue, 3 Jan 2023 18:42:55 +0000
Message-Id: <20230103184257.118069-4-dima@arista.com>
In-Reply-To: <20230103184257.118069-1-dima@arista.com>
References: <20230103184257.118069-1-dima@arista.com>

Use the crypto_pool API, which was designed with tcp_md5sig_pool in mind.
The conversion to crypto_pool allows:
- reusing ahash_request(s) for different users
- allocating only one per-CPU scratch buffer rather than a new one for
  each user
- having a common API for net/ users that need ahash on the RX/TX fast path

Signed-off-by: Dmitry Safonov
---
 include/net/tcp.h        |  24 +++------
 net/ipv4/Kconfig         |   2 +-
 net/ipv4/tcp.c           | 105 ++++++++++-----------------------------
 net/ipv4/tcp_ipv4.c      |  92 ++++++++++++++++++++--------------
 net/ipv4/tcp_minisocks.c |  21 +++++---
 net/ipv6/tcp_ipv6.c      |  53 +++++++++-----------
 6 files changed, 129 insertions(+), 168 deletions(-)

diff --git a/include/net/tcp.h b/include/net/tcp.h
index db9f828e9d1e..048057cb4c2e 100644
--- a/include/net/tcp.h
+++ b/include/net/tcp.h
@@ -1664,12 +1664,6 @@ union tcp_md5sum_block {
 #endif
 };

-/* - pool: digest algorithm, hash description and scratch buffer */
-struct tcp_md5sig_pool {
-        struct ahash_request *md5_req;
-        void *scratch;
-};
-
 /* - functions */
 int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
                         const struct sock *sk, const struct sk_buff *skb);
@@ -1725,17 +1719,15 @@ tcp_inbound_md5_hash(const struct sock *sk, const struct sk_buff *skb,
 #define tcp_twsk_md5_key(twsk) NULL
 #endif

-bool tcp_alloc_md5sig_pool(void);
-
-struct tcp_md5sig_pool *tcp_get_md5sig_pool(void);
-static inline void tcp_put_md5sig_pool(void)
-{
-        local_bh_enable();
-}
+struct crypto_pool_ahash;
+int tcp_md5_alloc_crypto_pool(void);
+void tcp_md5_release_crypto_pool(void);
+void tcp_md5_add_crypto_pool(void);
+extern int tcp_md5_crypto_pool_id;

-int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *, const struct sk_buff *,
-                          unsigned int header_len);
-int tcp_md5_hash_key(struct tcp_md5sig_pool *hp,
+int tcp_md5_hash_skb_data(struct crypto_pool_ahash *hp,
+                          const struct sk_buff *skb, unsigned int header_len);
+int tcp_md5_hash_key(struct crypto_pool_ahash *hp,
                      const struct tcp_md5sig_key *key);

 /* From tcp_fastopen.c */
diff --git a/net/ipv4/Kconfig b/net/ipv4/Kconfig
index 2dfb12230f08..46e8bdb749df 100644
--- a/net/ipv4/Kconfig
+++ b/net/ipv4/Kconfig
@@ -743,7 +743,7 @@ config DEFAULT_TCP_CONG

 config TCP_MD5SIG
         bool "TCP: MD5 Signature Option support (RFC2385)"
-        select CRYPTO
+        select CRYPTO_POOL
         select CRYPTO_MD5
         help
           RFC2385 specifies a method of giving MD5 protection to TCP sessions.
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index c567d5e8053e..b06a21949cf0 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -244,6 +244,7 @@
 #define pr_fmt(fmt) "TCP: " fmt

 #include
+#include <crypto/pool.h>
 #include
 #include
 #include
@@ -4411,98 +4412,45 @@ int tcp_getsockopt(struct sock *sk, int level, int optname, char __user *optval,
 EXPORT_SYMBOL(tcp_getsockopt);

 #ifdef CONFIG_TCP_MD5SIG
-static DEFINE_PER_CPU(struct tcp_md5sig_pool, tcp_md5sig_pool);
-static DEFINE_MUTEX(tcp_md5sig_mutex);
-static bool tcp_md5sig_pool_populated = false;
+int tcp_md5_crypto_pool_id = -1;
+EXPORT_SYMBOL(tcp_md5_crypto_pool_id);

-static void __tcp_alloc_md5sig_pool(void)
+int tcp_md5_alloc_crypto_pool(void)
 {
-        struct crypto_ahash *hash;
-        int cpu;
-
-        hash = crypto_alloc_ahash("md5", 0, CRYPTO_ALG_ASYNC);
-        if (IS_ERR(hash))
-                return;
-
-        for_each_possible_cpu(cpu) {
-                void *scratch = per_cpu(tcp_md5sig_pool, cpu).scratch;
-                struct ahash_request *req;
-
-                if (!scratch) {
-                        scratch = kmalloc_node(sizeof(union tcp_md5sum_block) +
-                                               sizeof(struct tcphdr),
-                                               GFP_KERNEL,
-                                               cpu_to_node(cpu));
-                        if (!scratch)
-                                return;
-                        per_cpu(tcp_md5sig_pool, cpu).scratch = scratch;
-                }
-                if (per_cpu(tcp_md5sig_pool, cpu).md5_req)
-                        continue;
-
-                req = ahash_request_alloc(hash, GFP_KERNEL);
-                if (!req)
-                        return;
+        int ret;

-                ahash_request_set_callback(req, 0, NULL, NULL);
+        ret = crypto_pool_reserve_scratch(sizeof(union tcp_md5sum_block) +
+                                          sizeof(struct tcphdr));
+        if (ret)
+                return ret;

-                per_cpu(tcp_md5sig_pool, cpu).md5_req = req;
+        ret = crypto_pool_alloc_ahash("md5");
+        if (ret >= 0) {
+                tcp_md5_crypto_pool_id = ret;
+                return 0;
         }
-        /* before setting tcp_md5sig_pool_populated, we must commit all writes
-         * to memory. See smp_rmb() in tcp_get_md5sig_pool()
-         */
-        smp_wmb();
-        /* Paired with READ_ONCE() from tcp_alloc_md5sig_pool()
-         * and tcp_get_md5sig_pool().
-         */
-        WRITE_ONCE(tcp_md5sig_pool_populated, true);
+        return ret;
 }
+EXPORT_SYMBOL(tcp_md5_alloc_crypto_pool);

-bool tcp_alloc_md5sig_pool(void)
+void tcp_md5_release_crypto_pool(void)
 {
-        /* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
-        if (unlikely(!READ_ONCE(tcp_md5sig_pool_populated))) {
-                mutex_lock(&tcp_md5sig_mutex);
-
-                if (!tcp_md5sig_pool_populated)
-                        __tcp_alloc_md5sig_pool();
-
-                mutex_unlock(&tcp_md5sig_mutex);
-        }
-        /* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
-        return READ_ONCE(tcp_md5sig_pool_populated);
+        crypto_pool_release(tcp_md5_crypto_pool_id);
 }
-EXPORT_SYMBOL(tcp_alloc_md5sig_pool);
+EXPORT_SYMBOL(tcp_md5_release_crypto_pool);

-
-/**
- * tcp_get_md5sig_pool - get md5sig_pool for this user
- *
- * We use percpu structure, so if we succeed, we exit with preemption
- * and BH disabled, to make sure another thread or softirq handling
- * wont try to get same context.
- */
-struct tcp_md5sig_pool *tcp_get_md5sig_pool(void)
+void tcp_md5_add_crypto_pool(void)
 {
-        local_bh_disable();
-
-        /* Paired with WRITE_ONCE() from __tcp_alloc_md5sig_pool() */
-        if (READ_ONCE(tcp_md5sig_pool_populated)) {
-                /* coupled with smp_wmb() in __tcp_alloc_md5sig_pool() */
-                smp_rmb();
-                return this_cpu_ptr(&tcp_md5sig_pool);
-        }
-        local_bh_enable();
-        return NULL;
+        crypto_pool_add(tcp_md5_crypto_pool_id);
 }
-EXPORT_SYMBOL(tcp_get_md5sig_pool);
+EXPORT_SYMBOL(tcp_md5_add_crypto_pool);

-int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp,
+int tcp_md5_hash_skb_data(struct crypto_pool_ahash *hp,
                           const struct sk_buff *skb, unsigned int header_len)
 {
         struct scatterlist sg;
         const struct tcphdr *tp = tcp_hdr(skb);
-        struct ahash_request *req = hp->md5_req;
+        struct ahash_request *req = hp->req;
         unsigned int i;
         const unsigned int head_data_len = skb_headlen(skb) > header_len ?
                                            skb_headlen(skb) - header_len : 0;
@@ -4536,16 +4484,17 @@ int tcp_md5_hash_skb_data(struct tcp_md5sig_pool *hp,
 }
 EXPORT_SYMBOL(tcp_md5_hash_skb_data);

-int tcp_md5_hash_key(struct tcp_md5sig_pool *hp, const struct tcp_md5sig_key *key)
+int tcp_md5_hash_key(struct crypto_pool_ahash *hp,
+                     const struct tcp_md5sig_key *key)
 {
         u8 keylen = READ_ONCE(key->keylen); /* paired with WRITE_ONCE() in tcp_md5_do_add */
         struct scatterlist sg;

         sg_init_one(&sg, key->key, keylen);
-        ahash_request_set_crypt(hp->md5_req, &sg, NULL, keylen);
+        ahash_request_set_crypt(hp->req, &sg, NULL, keylen);

         /* We use data_race() because tcp_md5_do_add() might change key->key under us */
-        return data_race(crypto_ahash_update(hp->md5_req));
+        return data_race(crypto_ahash_update(hp->req));
 }
 EXPORT_SYMBOL(tcp_md5_hash_key);

diff --git a/net/ipv4/tcp_ipv4.c b/net/ipv4/tcp_ipv4.c
index 8320d0ecb13a..93273b9a2948 100644
--- a/net/ipv4/tcp_ipv4.c
+++ b/net/ipv4/tcp_ipv4.c
@@ -79,6 +79,7 @@
 #include

 #include
+#include <crypto/pool.h>
 #include

 #include
@@ -1212,10 +1213,6 @@ static int __tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
         key = sock_kmalloc(sk, sizeof(*key), gfp | __GFP_ZERO);
         if (!key)
                 return -ENOMEM;
-        if (!tcp_alloc_md5sig_pool()) {
-                sock_kfree_s(sk, key, sizeof(*key));
-                return -ENOMEM;
-        }

         memcpy(key->key, newkey, newkeylen);
         key->keylen = newkeylen;
@@ -1237,8 +1234,13 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
         struct tcp_sock *tp = tcp_sk(sk);

         if (!rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk))) {
-                if (tcp_md5sig_info_add(sk, GFP_KERNEL))
+                if (tcp_md5_alloc_crypto_pool())
+                        return -ENOMEM;
+
+                if (tcp_md5sig_info_add(sk, GFP_KERNEL)) {
+                        tcp_md5_release_crypto_pool();
                         return -ENOMEM;
+                }

                 if (!static_branch_inc(&tcp_md5_needed.key)) {
                         struct tcp_md5sig_info *md5sig;
@@ -1246,6 +1248,7 @@ int tcp_md5_do_add(struct sock *sk, const union tcp_md5_addr *addr,
                         md5sig = rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk));
                         rcu_assign_pointer(tp->md5sig_info, NULL);
                         kfree_rcu(md5sig, rcu);
+                        tcp_md5_release_crypto_pool();
                         return -EUSERS;
                 }
         }
@@ -1262,8 +1265,12 @@ int tcp_md5_key_copy(struct sock *sk, const union tcp_md5_addr *addr,
         struct tcp_sock *tp = tcp_sk(sk);

         if (!rcu_dereference_protected(tp->md5sig_info, lockdep_sock_is_held(sk))) {
-                if (tcp_md5sig_info_add(sk, sk_gfp_mask(sk, GFP_ATOMIC)))
+                tcp_md5_add_crypto_pool();
+
+                if (tcp_md5sig_info_add(sk, sk_gfp_mask(sk, GFP_ATOMIC))) {
+                        tcp_md5_release_crypto_pool();
                         return -ENOMEM;
+                }

                 if (!static_key_fast_inc_not_disabled(&tcp_md5_needed.key.key)) {
                         struct tcp_md5sig_info *md5sig;
@@ -1272,6 +1279,7 @@ int tcp_md5_key_copy(struct sock *sk, const union tcp_md5_addr *addr,
                         net_warn_ratelimited("Too many TCP-MD5 keys in the system\n");
                         rcu_assign_pointer(tp->md5sig_info, NULL);
                         kfree_rcu(md5sig, rcu);
+                        tcp_md5_release_crypto_pool();
                         return -EUSERS;
                 }
         }
@@ -1371,7 +1379,7 @@ static int tcp_v4_parse_md5_keys(struct sock *sk, int optname,
                               cmd.tcpm_key, cmd.tcpm_keylen);
 }

-static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp,
+static int tcp_v4_md5_hash_headers(struct crypto_pool_ahash *hp,
                                    __be32 daddr, __be32 saddr,
                                    const struct tcphdr *th, int nbytes)
 {
@@ -1379,7 +1387,7 @@ static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp,
         struct scatterlist sg;
         struct tcphdr *_th;

-        bp = hp->scratch;
+        bp = hp->base.scratch;
         bp->saddr = saddr;
         bp->daddr = daddr;
         bp->pad = 0;
@@ -1391,37 +1399,34 @@ static int tcp_v4_md5_hash_headers(struct tcp_md5sig_pool *hp,
         _th->check = 0;

         sg_init_one(&sg, bp, sizeof(*bp) + sizeof(*th));
-        ahash_request_set_crypt(hp->md5_req, &sg, NULL,
+        ahash_request_set_crypt(hp->req, &sg, NULL,
                                 sizeof(*bp) + sizeof(*th));
-        return crypto_ahash_update(hp->md5_req);
+        return crypto_ahash_update(hp->req);
 }

 static int tcp_v4_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
                                __be32 daddr, __be32 saddr,
                                const struct tcphdr *th)
 {
-        struct tcp_md5sig_pool *hp;
-        struct ahash_request *req;
+        struct crypto_pool_ahash hp;

-        hp = tcp_get_md5sig_pool();
-        if (!hp)
+        if (crypto_pool_get(tcp_md5_crypto_pool_id, (struct crypto_pool *)&hp))
                 goto clear_hash_noput;
-        req = hp->md5_req;

-        if (crypto_ahash_init(req))
+        if (crypto_ahash_init(hp.req))
                 goto clear_hash;
-        if (tcp_v4_md5_hash_headers(hp, daddr, saddr, th, th->doff << 2))
+        if (tcp_v4_md5_hash_headers(&hp, daddr, saddr, th, th->doff << 2))
                 goto clear_hash;
-        if (tcp_md5_hash_key(hp, key))
+        if (tcp_md5_hash_key(&hp, key))
                 goto clear_hash;
-        ahash_request_set_crypt(req, NULL, md5_hash, 0);
-        if (crypto_ahash_final(req))
+        ahash_request_set_crypt(hp.req, NULL, md5_hash, 0);
+        if (crypto_ahash_final(hp.req))
                 goto clear_hash;

-        tcp_put_md5sig_pool();
+        crypto_pool_put();
         return 0;

 clear_hash:
-        tcp_put_md5sig_pool();
+        crypto_pool_put();
 clear_hash_noput:
         memset(md5_hash, 0, 16);
         return 1;
@@ -1431,8 +1436,7 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
                         const struct sock *sk,
                         const struct sk_buff *skb)
 {
-        struct tcp_md5sig_pool *hp;
-        struct ahash_request *req;
+        struct crypto_pool_ahash hp;
         const struct tcphdr *th = tcp_hdr(skb);
         __be32 saddr, daddr;

@@ -1445,29 +1449,27 @@ int tcp_v4_md5_hash_skb(char *md5_hash, const struct tcp_md5sig_key *key,
                 daddr = iph->daddr;
         }

-        hp = tcp_get_md5sig_pool();
-        if (!hp)
+        if (crypto_pool_get(tcp_md5_crypto_pool_id, (struct crypto_pool *)&hp))
                 goto clear_hash_noput;
-        req = hp->md5_req;

-        if (crypto_ahash_init(req))
+        if (crypto_ahash_init(hp.req))
                 goto clear_hash;

-        if (tcp_v4_md5_hash_headers(hp, daddr, saddr, th, skb->len))
+        if (tcp_v4_md5_hash_headers(&hp, daddr, saddr, th, skb->len))
                 goto clear_hash;
-        if (tcp_md5_hash_skb_data(hp, skb, th->doff << 2))
+        if (tcp_md5_hash_skb_data(&hp, skb, th->doff << 2))
                 goto clear_hash;
-        if (tcp_md5_hash_key(hp, key))
+        if (tcp_md5_hash_key(&hp, key))
                 goto clear_hash;
-        ahash_request_set_crypt(req, NULL, md5_hash, 0);
-        if (crypto_ahash_final(req))
+        ahash_request_set_crypt(hp.req, NULL, md5_hash, 0);
+        if (crypto_ahash_final(hp.req))
                 goto clear_hash;

-        tcp_put_md5sig_pool();
+        crypto_pool_put();
         return 0;

 clear_hash:
-        tcp_put_md5sig_pool();
+        crypto_pool_put();
 clear_hash_noput:
         memset(md5_hash, 0, 16);
         return 1;
@@ -2285,6 +2287,18 @@ static int tcp_v4_init_sock(struct sock *sk)
         return 0;
 }

+#ifdef CONFIG_TCP_MD5SIG
+static void tcp_md5sig_info_free_rcu(struct rcu_head *head)
+{
+        struct tcp_md5sig_info *md5sig;
+
+        md5sig = container_of(head, struct tcp_md5sig_info, rcu);
+        kfree(md5sig);
+        static_branch_slow_dec_deferred(&tcp_md5_needed);
+        tcp_md5_release_crypto_pool();
+}
+#endif
+
 void tcp_v4_destroy_sock(struct sock *sk)
 {
         struct tcp_sock *tp = tcp_sk(sk);
@@ -2309,10 +2323,12 @@ void tcp_v4_destroy_sock(struct sock *sk)
 #ifdef CONFIG_TCP_MD5SIG
         /* Clean up the MD5 key list, if any */
         if (tp->md5sig_info) {
+                struct tcp_md5sig_info *md5sig;
+
+                md5sig = rcu_dereference_protected(tp->md5sig_info, 1);
                 tcp_clear_md5_list(sk);
-                kfree_rcu(rcu_dereference_protected(tp->md5sig_info, 1), rcu);
-                tp->md5sig_info = NULL;
-                static_branch_slow_dec_deferred(&tcp_md5_needed);
+                call_rcu(&md5sig->rcu, tcp_md5sig_info_free_rcu);
+                rcu_assign_pointer(tp->md5sig_info, NULL);
         }
 #endif

diff --git a/net/ipv4/tcp_minisocks.c b/net/ipv4/tcp_minisocks.c
index e002f2e1d4f2..b2ffda09f3b4 100644
--- a/net/ipv4/tcp_minisocks.c
+++ b/net/ipv4/tcp_minisocks.c
@@ -261,11 +261,10 @@ static void tcp_time_wait_init(struct sock *sk, struct tcp_timewait_sock *tcptw)
                 tcptw->tw_md5_key = kmemdup(key, sizeof(*key), GFP_ATOMIC);
                 if (!tcptw->tw_md5_key)
                         return;
-                if (!tcp_alloc_md5sig_pool())
-                        goto out_free;
                 if (!static_key_fast_inc_not_disabled(&tcp_md5_needed.key.key))
                         goto out_free;
         }
+        tcp_md5_add_crypto_pool();
         return;
 out_free:
         WARN_ON_ONCE(1);
@@ -349,16 +348,26 @@ void tcp_time_wait(struct sock *sk, int state, int timeo)
 }
 EXPORT_SYMBOL(tcp_time_wait);

+#ifdef CONFIG_TCP_MD5SIG
+static void tcp_md5_twsk_free_rcu(struct rcu_head *head)
+{
+        struct tcp_md5sig_key *key;
+
+        key = container_of(head, struct tcp_md5sig_key, rcu);
+        kfree(key);
+        static_branch_slow_dec_deferred(&tcp_md5_needed);
+        tcp_md5_release_crypto_pool();
+}
+#endif
+
 void tcp_twsk_destructor(struct sock *sk)
 {
 #ifdef CONFIG_TCP_MD5SIG
         if (static_branch_unlikely(&tcp_md5_needed.key)) {
                 struct tcp_timewait_sock *twsk = tcp_twsk(sk);

-                if (twsk->tw_md5_key) {
-                        kfree_rcu(twsk->tw_md5_key, rcu);
-                        static_branch_slow_dec_deferred(&tcp_md5_needed);
-                }
+                if (twsk->tw_md5_key)
+                        call_rcu(&twsk->tw_md5_key->rcu, tcp_md5_twsk_free_rcu);
         }
 #endif
 }
diff --git a/net/ipv6/tcp_ipv6.c b/net/ipv6/tcp_ipv6.c
index 11b736a76bd7..1a94b5f6d152 100644
--- a/net/ipv6/tcp_ipv6.c
+++ b/net/ipv6/tcp_ipv6.c
@@ -64,6 +64,7 @@
 #include

 #include
+#include <crypto/pool.h>
 #include

 #include
@@ -672,7 +673,7 @@ static int tcp_v6_parse_md5_keys(struct sock *sk, int optname,
                               cmd.tcpm_key, cmd.tcpm_keylen);
 }

-static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp,
+static int tcp_v6_md5_hash_headers(struct crypto_pool_ahash *hp,
                                    const struct in6_addr *daddr,
                                    const struct in6_addr *saddr,
                                    const struct tcphdr *th, int nbytes)
@@ -681,7 +682,7 @@ static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp,
         struct scatterlist sg;
         struct tcphdr *_th;

-        bp = hp->scratch;
+        bp = hp->base.scratch;
         /* 1. TCP pseudo-header (RFC2460) */
         bp->saddr = *saddr;
         bp->daddr = *daddr;
@@ -693,38 +694,35 @@ static int tcp_v6_md5_hash_headers(struct tcp_md5sig_pool *hp,
         _th->check = 0;

         sg_init_one(&sg, bp, sizeof(*bp) + sizeof(*th));
-        ahash_request_set_crypt(hp->md5_req, &sg, NULL,
+        ahash_request_set_crypt(hp->req, &sg, NULL,
                                 sizeof(*bp) + sizeof(*th));
-        return crypto_ahash_update(hp->md5_req);
+        return crypto_ahash_update(hp->req);
 }

 static int tcp_v6_md5_hash_hdr(char *md5_hash, const struct tcp_md5sig_key *key,
                                const struct in6_addr *daddr, struct in6_addr *saddr,
                                const struct tcphdr *th)
 {
-        struct tcp_md5sig_pool *hp;
-        struct ahash_request *req;
+        struct crypto_pool_ahash hp;

-        hp = tcp_get_md5sig_pool();
-        if (!hp)
+        if (crypto_pool_get(tcp_md5_crypto_pool_id, (struct crypto_pool *)&hp))
                 goto clear_hash_noput;
-        req = hp->md5_req;

-        if (crypto_ahash_init(req))
+        if (crypto_ahash_init(hp.req))
                 goto clear_hash;
-        if (tcp_v6_md5_hash_headers(hp, daddr, saddr, th, th->doff << 2))
+        if (tcp_v6_md5_hash_headers(&hp, daddr, saddr, th, th->doff << 2))
                 goto clear_hash;
-        if (tcp_md5_hash_key(hp, key))
+        if (tcp_md5_hash_key(&hp, key))
                 goto clear_hash;
-        ahash_request_set_crypt(req, NULL, md5_hash, 0);
-        if (crypto_ahash_final(req))
+        ahash_request_set_crypt(hp.req, NULL, md5_hash, 0);
+        if (crypto_ahash_final(hp.req))
                 goto clear_hash;

-        tcp_put_md5sig_pool();
+        crypto_pool_put();
         return 0;

 clear_hash:
-        tcp_put_md5sig_pool();
+        crypto_pool_put();
 clear_hash_noput:
         memset(md5_hash, 0, 16);
         return 1;
@@ -736,8 +734,7 @@ static int tcp_v6_md5_hash_skb(char *md5_hash,
                                const struct sk_buff *skb)
 {
         const struct in6_addr *saddr, *daddr;
-        struct tcp_md5sig_pool *hp;
-        struct ahash_request *req;
+        struct crypto_pool_ahash hp;
         const struct tcphdr *th = tcp_hdr(skb);

         if (sk) { /* valid for establish/request sockets */
@@ -749,29 +746,27 @@ static int tcp_v6_md5_hash_skb(char *md5_hash,
                 daddr = &ip6h->daddr;
         }

-        hp = tcp_get_md5sig_pool();
-        if (!hp)
+        if (crypto_pool_get(tcp_md5_crypto_pool_id, (struct crypto_pool *)&hp))
                 goto clear_hash_noput;
-        req = hp->md5_req;

-        if (crypto_ahash_init(req))
+        if (crypto_ahash_init(hp.req))
                 goto clear_hash;

-        if (tcp_v6_md5_hash_headers(hp, daddr, saddr, th, skb->len))
+        if (tcp_v6_md5_hash_headers(&hp, daddr, saddr, th, skb->len))
                 goto clear_hash;
-        if (tcp_md5_hash_skb_data(hp, skb, th->doff << 2))
+        if (tcp_md5_hash_skb_data(&hp, skb, th->doff << 2))
                 goto clear_hash;
-        if (tcp_md5_hash_key(hp, key))
+        if (tcp_md5_hash_key(&hp, key))
                 goto clear_hash;
-        ahash_request_set_crypt(req, NULL, md5_hash, 0);
-        if (crypto_ahash_final(req))
+        ahash_request_set_crypt(hp.req, NULL, md5_hash, 0);
+        if (crypto_ahash_final(hp.req))
                 goto clear_hash;

-        tcp_put_md5sig_pool();
+        crypto_pool_put();
         return 0;

 clear_hash:
-        tcp_put_md5sig_pool();
+        crypto_pool_put();
 clear_hash_noput:
         memset(md5_hash, 0, 16);
         return 1;
--
2.39.0
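After the conversion, the v4 and v6 signing paths share the same shape;
condensed into one sketch (a hypothetical helper, not from the patch — the
(struct crypto_pool *)&hp cast works because base is the first member of
struct crypto_pool_ahash):

static int example_tcp_md5_digest(int pool_id, struct scatterlist *sg,
                                  unsigned int nbytes, u8 *out)
{
        struct crypto_pool_ahash hp;

        /* bh stays disabled between crypto_pool_get() and crypto_pool_put() */
        if (crypto_pool_get(pool_id, (struct crypto_pool *)&hp))
                goto clear_noput;

        if (crypto_ahash_init(hp.req))
                goto clear;
        ahash_request_set_crypt(hp.req, sg, NULL, nbytes);
        if (crypto_ahash_update(hp.req))
                goto clear;
        ahash_request_set_crypt(hp.req, NULL, out, 0);
        if (crypto_ahash_final(hp.req))
                goto clear;

        crypto_pool_put();
        return 0;

clear:
        crypto_pool_put();
clear_noput:
        memset(out, 0, 16);     /* MD5 digest size */
        return 1;
}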
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Herbert Xu,
    Jakub Kicinski, "David S. Miller"
Cc: Dmitry Safonov, Andy Lutomirski, Bob Gilligan,
    Dmitry Safonov <0x7f454c46@gmail.com>, Hideaki YOSHIFUJI, Leonard Crestez,
    Paolo Abeni, Salam Noureddine, netdev@vger.kernel.org,
    linux-crypto@vger.kernel.org
Subject: [PATCH v2 4/5] crypto/net/ipv6: sr: Switch to using crypto_pool
Date: Tue, 3 Jan 2023 18:42:56 +0000
Message-Id: <20230103184257.118069-5-dima@arista.com>
In-Reply-To: <20230103184257.118069-1-dima@arista.com>
References: <20230103184257.118069-1-dima@arista.com>

The conversion to crypto_pool has the following upsides:
- SR now uses the asynchronous hash API, which may free CPU cycles and
  improve performance where off-CPU crypto algorithm providers are
  available;
- hash descriptors no longer have to be allocated at boot, but only from
  the moment SR starts using HMAC until the last HMAC secret is deleted;
- ahash_request(s) can potentially be reused by different users;
- only one per-CPU scratch buffer is allocated rather than a new one for
  each user;
- there is a common API for net/ users that need ahash on the RX/TX fast
  path.

Signed-off-by: Dmitry Safonov
---
 include/net/seg6_hmac.h |   7 --
 net/ipv6/Kconfig        |   2 +-
 net/ipv6/seg6.c         |   3 -
 net/ipv6/seg6_hmac.c    | 204 ++++++++++++++++------------------------
 4 files changed, 80 insertions(+), 136 deletions(-)

diff --git a/include/net/seg6_hmac.h b/include/net/seg6_hmac.h
index 2b5d2ee5613e..d6b7820ecda2 100644
--- a/include/net/seg6_hmac.h
+++ b/include/net/seg6_hmac.h
@@ -32,13 +32,6 @@ struct seg6_hmac_info {
         u8 alg_id;
 };

-struct seg6_hmac_algo {
-        u8 alg_id;
-        char name[64];
-        struct crypto_shash * __percpu *tfms;
-        struct shash_desc * __percpu *shashs;
-};
-
 extern int seg6_hmac_compute(struct seg6_hmac_info *hinfo,
                              struct ipv6_sr_hdr *hdr, struct in6_addr *saddr,
                              u8 *output);
diff --git a/net/ipv6/Kconfig b/net/ipv6/Kconfig
index 658bfed1df8b..5be1dab0f178 100644
--- a/net/ipv6/Kconfig
+++ b/net/ipv6/Kconfig
@@ -304,7 +304,7 @@ config IPV6_SEG6_LWTUNNEL
 config IPV6_SEG6_HMAC
         bool "IPv6: Segment Routing HMAC support"
         depends on IPV6
-        select CRYPTO
+        select CRYPTO_POOL
         select CRYPTO_HMAC
         select CRYPTO_SHA1
         select CRYPTO_SHA256
diff --git a/net/ipv6/seg6.c b/net/ipv6/seg6.c
index 29346a6eec9f..3d66bf6d4c66 100644
--- a/net/ipv6/seg6.c
+++ b/net/ipv6/seg6.c
@@ -558,9 +558,6 @@ int __init seg6_init(void)

 void seg6_exit(void)
 {
-#ifdef CONFIG_IPV6_SEG6_HMAC
-        seg6_hmac_exit();
-#endif
 #ifdef CONFIG_IPV6_SEG6_LWTUNNEL
         seg6_iptunnel_exit();
 #endif
diff --git a/net/ipv6/seg6_hmac.c b/net/ipv6/seg6_hmac.c
index d43c50a7310d..3732dd993925 100644
--- a/net/ipv6/seg6_hmac.c
+++ b/net/ipv6/seg6_hmac.c
@@ -35,6 +35,7 @@
 #include

 #include
+#include <crypto/pool.h>
 #include
 #include
 #include
@@ -70,6 +71,12 @@ static const struct rhashtable_params rht_params = {
         .obj_cmpfn = seg6_hmac_cmpfn,
 };

+struct seg6_hmac_algo {
+        u8 alg_id;
+        char name[64];
+        int crypto_pool_id;
+};
+
 static struct seg6_hmac_algo hmac_algos[] = {
         {
                 .alg_id = SEG6_HMAC_ALGO_SHA1,
@@ -115,55 +122,17 @@ static struct seg6_hmac_algo *__hmac_get_algo(u8 alg_id)
         return NULL;
 }

-static int __do_hmac(struct seg6_hmac_info *hinfo, const char *text, u8 psize,
-                     u8 *output, int outlen)
-{
-        struct seg6_hmac_algo *algo;
-        struct crypto_shash *tfm;
-        struct shash_desc *shash;
-        int ret, dgsize;
-
-        algo = __hmac_get_algo(hinfo->alg_id);
-        if (!algo)
-                return -ENOENT;
-
-        tfm = *this_cpu_ptr(algo->tfms);
-
-        dgsize = crypto_shash_digestsize(tfm);
-        if (dgsize > outlen) {
-                pr_debug("sr-ipv6: __do_hmac: digest size too big (%d / %d)\n",
-                         dgsize, outlen);
-                return -ENOMEM;
-        }
-
-        ret = crypto_shash_setkey(tfm, hinfo->secret, hinfo->slen);
-        if (ret < 0) {
-                pr_debug("sr-ipv6: crypto_shash_setkey failed: err %d\n", ret);
-                goto failed;
-        }
-
-        shash = *this_cpu_ptr(algo->shashs);
-        shash->tfm = tfm;
-
-        ret = crypto_shash_digest(shash, text, psize, output);
-        if (ret < 0) {
-                pr_debug("sr-ipv6: crypto_shash_digest failed: err %d\n", ret);
-                goto failed;
-        }
-
-        return dgsize;
-
-failed:
-        return ret;
-}
-
 int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr,
                       struct in6_addr *saddr, u8 *output)
 {
         __be32 hmackeyid = cpu_to_be32(hinfo->hmackeyid);
-        u8 tmp_out[SEG6_HMAC_MAX_DIGESTSIZE];
+        struct crypto_pool_ahash hp;
+        struct seg6_hmac_algo *algo;
         int plen, i, dgsize, wrsize;
+        struct crypto_ahash *tfm;
+        struct scatterlist sg;
         char *ring, *off;
+        int err;

         /* a 160-byte buffer for digest output allows to store highest known
          * hash function (RadioGatun) with up to 1216 bits
@@ -176,6 +145,10 @@ int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr,
         if (plen >= SEG6_HMAC_RING_SIZE)
                 return -EMSGSIZE;

+        algo = __hmac_get_algo(hinfo->alg_id);
+        if (!algo)
+                return -ENOENT;
+
         /* Let's build the HMAC text on the ring buffer. The text is composed
          * as follows, in order:
          *
@@ -186,8 +159,36 @@ int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr,
          * 5. All segments in the segments list (n * 128 bits)
          */

-        local_bh_disable();
+        err = crypto_pool_get(algo->crypto_pool_id, (struct crypto_pool *)&hp);
+        if (err)
+                return err;
+
         ring = this_cpu_ptr(hmac_ring);
+
+        sg_init_one(&sg, ring, plen);
+
+        tfm = crypto_ahash_reqtfm(hp.req);
+        dgsize = crypto_ahash_digestsize(tfm);
+        if (dgsize > SEG6_HMAC_MAX_DIGESTSIZE) {
+                pr_debug("digest size too big (%d / %d)\n",
+                         dgsize, SEG6_HMAC_MAX_DIGESTSIZE);
+                err = -ENOMEM;
+                goto err_put_pool;
+        }
+
+        err = crypto_ahash_setkey(tfm, hinfo->secret, hinfo->slen);
+        if (err) {
+                pr_debug("crypto_ahash_setkey failed: err %d\n", err);
+                goto err_put_pool;
+        }
+
+        err = crypto_ahash_init(hp.req);
+        if (err)
+                goto err_put_pool;
+
+        ahash_request_set_crypt(hp.req, &sg,
+                                hp.base.scratch, SEG6_HMAC_MAX_DIGESTSIZE);
+
         off = ring;

         /* source address */
@@ -210,21 +211,25 @@ int seg6_hmac_compute(struct seg6_hmac_info *hinfo, struct ipv6_sr_hdr *hdr,
                 off += 16;
         }

-        dgsize = __do_hmac(hinfo, ring, plen, tmp_out,
-                           SEG6_HMAC_MAX_DIGESTSIZE);
-        local_bh_enable();
+        err = crypto_ahash_update(hp.req);
+        if (err)
+                goto err_put_pool;

-        if (dgsize < 0)
-                return dgsize;
+        err = crypto_ahash_final(hp.req);
+        if (err)
+                goto err_put_pool;

         wrsize = SEG6_HMAC_FIELD_LEN;
         if (wrsize > dgsize)
                 wrsize = dgsize;

         memset(output, 0, SEG6_HMAC_FIELD_LEN);
-        memcpy(output, tmp_out, wrsize);
+        memcpy(output, hp.base.scratch, wrsize);

-        return 0;
+err_put_pool:
+        crypto_pool_put();
+
+        return err;
 }
 EXPORT_SYMBOL(seg6_hmac_compute);

@@ -291,12 +296,24 @@ EXPORT_SYMBOL(seg6_hmac_info_lookup);
 int seg6_hmac_info_add(struct net *net, u32 key, struct seg6_hmac_info *hinfo)
 {
         struct seg6_pernet_data *sdata = seg6_pernet(net);
-        int err;
+        struct seg6_hmac_algo *algo;
+        int ret;

-        err = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node,
+        algo = __hmac_get_algo(hinfo->alg_id);
+        if (!algo)
+                return -ENOENT;
+
+        ret = crypto_pool_alloc_ahash(algo->name);
+        if (ret < 0)
+                return ret;
+        algo->crypto_pool_id = ret;
+
+        ret = rhashtable_lookup_insert_fast(&sdata->hmac_infos, &hinfo->node,
                                             rht_params);
+        if (ret)
+                crypto_pool_release(algo->crypto_pool_id);

-        return err;
+        return ret;
 }
 EXPORT_SYMBOL(seg6_hmac_info_add);

@@ -304,6 +321,7 @@ int seg6_hmac_info_del(struct net *net, u32 key)
 {
         struct seg6_pernet_data *sdata = seg6_pernet(net);
         struct seg6_hmac_info *hinfo;
+        struct seg6_hmac_algo *algo;
         int err = -ENOENT;

         hinfo = rhashtable_lookup_fast(&sdata->hmac_infos, &key, rht_params);
@@ -315,6 +333,12 @@ int seg6_hmac_info_del(struct net *net, u32 key)
         if (err)
                 goto out;

+        algo = __hmac_get_algo(hinfo->alg_id);
+        if (algo)
+                crypto_pool_release(algo->crypto_pool_id);
+        else
+                WARN_ON_ONCE(1);
+
         seg6_hinfo_release(hinfo);

 out:
@@ -348,56 +372,9 @@ int seg6_push_hmac(struct net *net, struct in6_addr *saddr,
 }
 EXPORT_SYMBOL(seg6_push_hmac);

-static int seg6_hmac_init_algo(void)
-{
-        struct seg6_hmac_algo *algo;
-        struct crypto_shash *tfm;
-        struct shash_desc *shash;
-        int i, alg_count, cpu;
-
-        alg_count = ARRAY_SIZE(hmac_algos);
-
-        for (i = 0; i < alg_count; i++) {
-                struct crypto_shash **p_tfm;
-                int shsize;
-
-                algo = &hmac_algos[i];
-                algo->tfms = alloc_percpu(struct crypto_shash *);
-                if (!algo->tfms)
-                        return -ENOMEM;
-
-                for_each_possible_cpu(cpu) {
-                        tfm = crypto_alloc_shash(algo->name, 0, 0);
-                        if (IS_ERR(tfm))
-                                return PTR_ERR(tfm);
-                        p_tfm = per_cpu_ptr(algo->tfms, cpu);
-                        *p_tfm = tfm;
-                }
-
-                p_tfm = raw_cpu_ptr(algo->tfms);
-                tfm = *p_tfm;
-
-                shsize = sizeof(*shash) + crypto_shash_descsize(tfm);
-
-                algo->shashs = alloc_percpu(struct shash_desc *);
-                if (!algo->shashs)
-                        return -ENOMEM;
-
-                for_each_possible_cpu(cpu) {
-                        shash = kzalloc_node(shsize, GFP_KERNEL,
-                                             cpu_to_node(cpu));
-                        if (!shash)
-                                return -ENOMEM;
-                        *per_cpu_ptr(algo->shashs, cpu) = shash;
-                }
-        }
-
-        return 0;
-}
-
 int __init seg6_hmac_init(void)
 {
-        return seg6_hmac_init_algo();
+        return crypto_pool_reserve_scratch(SEG6_HMAC_MAX_DIGESTSIZE);
 }

 int __net_init seg6_hmac_net_init(struct net *net)
@@ -407,29 +384,6 @@ int __net_init seg6_hmac_net_init(struct net *net)
         return rhashtable_init(&sdata->hmac_infos, &rht_params);
 }

-void seg6_hmac_exit(void)
-{
-        struct seg6_hmac_algo *algo = NULL;
-        int i, alg_count, cpu;
-
-        alg_count = ARRAY_SIZE(hmac_algos);
-        for (i = 0; i < alg_count; i++) {
-                algo = &hmac_algos[i];
-                for_each_possible_cpu(cpu) {
-                        struct crypto_shash *tfm;
-                        struct shash_desc *shash;
-
-                        shash = *per_cpu_ptr(algo->shashs, cpu);
-                        kfree(shash);
-                        tfm = *per_cpu_ptr(algo->tfms, cpu);
-                        crypto_free_shash(tfm);
-                }
-                free_percpu(algo->tfms);
-                free_percpu(algo->shashs);
-        }
-}
-EXPORT_SYMBOL(seg6_hmac_exit);
-
 void __net_exit seg6_hmac_net_exit(struct net *net)
 {
         struct seg6_pernet_data *sdata = seg6_pernet(net);
--
2.39.0
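Worth noting for this conversion: crypto_pool_alloc_ahash() returns the same
id for an algorithm that is already populated and only takes another
reference, so the per-HMAC-info add/del paths pair naturally with
alloc/release. A sketch (hypothetical, not from the patch):

static int example_two_users(void)
{
        int a, b;

        a = crypto_pool_alloc_ahash("hmac(sha1)");
        if (a < 0)
                return a;

        b = crypto_pool_alloc_ahash("hmac(sha1)");
        if (b < 0) {
                crypto_pool_release(a);
                return b;
        }
        WARN_ON_ONCE(a != b);   /* same pool, refcount is now 2 */

        crypto_pool_release(b); /* pool survives: one reference left */
        crypto_pool_release(a); /* last ref: cleanup work gets scheduled */
        return 0;
}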
From: Dmitry Safonov
To: linux-kernel@vger.kernel.org, David Ahern, Eric Dumazet, Herbert Xu,
    Jakub Kicinski, "David S. Miller"
Cc: Dmitry Safonov, Andy Lutomirski, Bob Gilligan,
    Dmitry Safonov <0x7f454c46@gmail.com>, Hideaki YOSHIFUJI, Leonard Crestez,
    Paolo Abeni, Salam Noureddine, netdev@vger.kernel.org,
    linux-crypto@vger.kernel.org
Subject: [PATCH v2 5/5] crypto/Documentation: Add crypto_pool kernel API
Date: Tue, 3 Jan 2023 18:42:57 +0000
Message-Id: <20230103184257.118069-6-dima@arista.com>
In-Reply-To: <20230103184257.118069-1-dima@arista.com>
References: <20230103184257.118069-1-dima@arista.com>

Signed-off-by: Dmitry Safonov
---
 Documentation/crypto/crypto_pool.rst | 33 ++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 Documentation/crypto/crypto_pool.rst

diff --git a/Documentation/crypto/crypto_pool.rst b/Documentation/crypto/crypto_pool.rst
new file mode 100644
index 000000000000..4b8443171421
--- /dev/null
+++ b/Documentation/crypto/crypto_pool.rst
@@ -0,0 +1,33 @@
+.. SPDX-License-Identifier: GPL-2.0
+
+Per-CPU pool of crypto requests
+===============================
+
+Overview
+--------
+The crypto pool API manages a pre-allocated per-CPU pool of crypto requests,
+providing the ability to use async crypto requests on fast paths, potentially
+in atomic contexts. The allocation and initialization of the requests should
+be done before their usage, as it is a slow path and may sleep.
+
+Order of operations
+-------------------
+You are required to allocate a new pool prior to using it and to manage its
+lifetime. You can allocate a per-CPU pool of ahash requests with
+``crypto_pool_alloc_ahash()``. It returns a pool id that you can use further
+on the fast path for hashing. You can increase the reference counter of an
+allocated pool via ``crypto_pool_add()`` and decrease it via
+``crypto_pool_release()``. When the refcounter hits zero, the pool is
+scheduled for destruction and you can't use the corresponding crypto pool id
+anymore. Note that ``crypto_pool_add()`` and ``crypto_pool_release()`` must
+be called only for an already existing pool and can be called in atomic
+contexts.
+
+``crypto_pool_get()`` disables bh and returns a ``struct crypto_pool *``,
+which is a generic type for different crypto requests and has a ``scratch``
+area that can be used as a temporary buffer for your operation.
+
+``crypto_pool_put()`` enables bh back once you are done with your crypto
+operation.
+
+If you need to pre-allocate a bigger per-CPU ``scratch`` area for your
+requests, you can use ``crypto_pool_reserve_scratch()``.
--
2.39.0
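The documented order of operations, as one end-to-end sketch (hypothetical
caller, not from the series; the slow-path calls may sleep, while the
get/put pair is the fast path):

#include <crypto/pool.h>

static int lifecycle_demo(void)
{
        struct crypto_pool_ahash hp;
        char name[CRYPTO_MAX_ALG_NAME];
        int id, err;

        id = crypto_pool_alloc_ahash("md5");            /* 1. allocate (slow path) */
        if (id < 0)
                return id;

        crypto_pool_add(id);                            /* 2. one more user */

        err = crypto_pool_get(id, (struct crypto_pool *)&hp); /* 3. fast path */
        if (!err) {
                /* ... drive hp.req, use hp.base.scratch as a temp buffer ... */
                crypto_pool_put();                      /* 4. re-enable bh */
        }

        crypto_pool_algo(id, name, sizeof(name));       /* copies "md5" */

        crypto_pool_release(id);                        /* drop both references */
        crypto_pool_release(id);
        return err;
}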