From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    sparclinux@vger.kernel.org, linux-s390@vger.kernel.org, x86@kernel.org,
    Ard Biesheuvel, "Jason A. Donenfeld", Linus Torvalds
Subject: [PATCH 01/13] crypto: sha256 - support arch-optimized lib and expose through shash
Date: Fri, 25 Apr 2025 23:50:27 -0700
Message-ID: <20250426065041.1551914-2-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>
References: <20250426065041.1551914-1-ebiggers@kernel.org>

From: Eric Biggers

As has been done for various other algorithms, rework the design of the
SHA-256 library to support arch-optimized implementations, and make
crypto/sha256.c expose both generic and arch-optimized shash algorithms
that wrap the library functions.

This allows users of the SHA-256 library functions to take advantage of
the arch-optimized code, and it makes it much simpler to integrate
SHA-256 for each architecture.

Note that sha256_base.h is not used in the new design.  It will be
removed once all the architecture-specific code has been updated.
Signed-off-by: Eric Biggers
---
 crypto/Kconfig                 |   1 +
 crypto/Makefile                |   3 +-
 crypto/sha256.c                | 201 +++++++++++++++++++++++++++++++++
 crypto/sha256_generic.c        | 102 -----------------
 include/crypto/internal/sha2.h |  28 +++++
 include/crypto/sha2.h          |  15 +--
 include/crypto/sha256_base.h   |   9 +-
 lib/crypto/Kconfig             |  19 ++++
 lib/crypto/sha256.c            | 122 +++++++++++++++++---
 9 files changed, 367 insertions(+), 133 deletions(-)
 create mode 100644 crypto/sha256.c
 delete mode 100644 crypto/sha256_generic.c
 create mode 100644 include/crypto/internal/sha2.h

diff --git a/crypto/Kconfig b/crypto/Kconfig
index 9878286d1d683..daf46053d25a5 100644
--- a/crypto/Kconfig
+++ b/crypto/Kconfig
@@ -992,10 +992,11 @@ config CRYPTO_SHA1
 
 config CRYPTO_SHA256
 	tristate "SHA-224 and SHA-256"
 	select CRYPTO_HASH
 	select CRYPTO_LIB_SHA256
+	select CRYPTO_LIB_SHA256_GENERIC
 	help
 	  SHA-224 and SHA-256 secure hash algorithms (FIPS 180, ISO/IEC 10118-3)
 
 	  This is required for IPsec AH (XFRM_AH) and IPsec ESP (XFRM_ESP).
 	  Used by the btrfs filesystem, Ceph, NFS, and SMB.
diff --git a/crypto/Makefile b/crypto/Makefile
index 5d2f2a28d8a07..2a23926b9f4f5 100644
--- a/crypto/Makefile
+++ b/crypto/Makefile
@@ -74,11 +74,12 @@ obj-$(CONFIG_CRYPTO_XCBC) += xcbc.o
 obj-$(CONFIG_CRYPTO_NULL2) += crypto_null.o
 obj-$(CONFIG_CRYPTO_MD4) += md4.o
 obj-$(CONFIG_CRYPTO_MD5) += md5.o
 obj-$(CONFIG_CRYPTO_RMD160) += rmd160.o
 obj-$(CONFIG_CRYPTO_SHA1) += sha1_generic.o
-obj-$(CONFIG_CRYPTO_SHA256) += sha256_generic.o
+obj-$(CONFIG_CRYPTO_SHA256) += sha256.o
+CFLAGS_sha256.o += -DARCH=$(ARCH)
 obj-$(CONFIG_CRYPTO_SHA512) += sha512_generic.o
 obj-$(CONFIG_CRYPTO_SHA3) += sha3_generic.o
 obj-$(CONFIG_CRYPTO_SM3_GENERIC) += sm3_generic.o
 obj-$(CONFIG_CRYPTO_STREEBOG) += streebog_generic.o
 obj-$(CONFIG_CRYPTO_WP512) += wp512.o
diff --git a/crypto/sha256.c b/crypto/sha256.c
new file mode 100644
index 0000000000000..1c2edcf9453dc
--- /dev/null
+++ b/crypto/sha256.c
@@ -0,0 +1,201 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Crypto API wrapper for the SHA-256 and SHA-224 library functions
+ *
+ * Copyright (c) Jean-Luc Cooke
+ * Copyright (c) Andrew McDonald
+ * Copyright (c) 2002 James Morris
+ * SHA224 Support Copyright 2007 Intel Corporation
+ */
+#include
+#include
+#include
+#include
+
+const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] = {
+	0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, 0x47,
+	0x61, 0x02, 0xbb, 0x28, 0x82, 0x34, 0xc4, 0x15, 0xa2,
+	0xb0, 0x1f, 0x82, 0x8e, 0xa6, 0x2a, 0xc5, 0xb3, 0xe4,
+	0x2f
+};
+EXPORT_SYMBOL_GPL(sha224_zero_message_hash);
+
+const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] = {
+	0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14,
+	0x9a, 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24,
+	0x27, 0xae, 0x41, 0xe4, 0x64, 0x9b, 0x93, 0x4c,
+	0xa4, 0x95, 0x99, 0x1b, 0x78, 0x52, 0xb8, 0x55
+};
+EXPORT_SYMBOL_GPL(sha256_zero_message_hash);
+
+static int crypto_sha256_init(struct shash_desc *desc)
+{
+	sha256_init(shash_desc_ctx(desc));
+	return 0;
+}
+
+static int crypto_sha256_update_generic(struct shash_desc *desc, const u8 *data,
+					unsigned int len)
+{
+	sha256_update_generic(shash_desc_ctx(desc), data, len);
+	return 0;
+}
+
+static int crypto_sha256_update_arch(struct shash_desc *desc, const u8 *data,
+				     unsigned int len)
+{
+	sha256_update(shash_desc_ctx(desc), data, len);
+	return 0;
+}
+
+static int crypto_sha256_final_generic(struct shash_desc *desc, u8 *out)
+{
+	sha256_final_generic(shash_desc_ctx(desc), out);
+	return 0;
+}
+
+static int crypto_sha256_final_arch(struct shash_desc *desc, u8 *out)
+{
+	sha256_final(shash_desc_ctx(desc), out);
+	return 0;
+}
+
+static int crypto_sha256_finup_generic(struct shash_desc *desc, const u8 *data,
+				       unsigned int len, u8 *out)
+{
+	struct sha256_state *sctx = shash_desc_ctx(desc);
+
+	sha256_update_generic(sctx, data, len);
+	sha256_final_generic(sctx, out);
+	return 0;
+}
+
+static int crypto_sha256_finup_arch(struct shash_desc *desc, const u8 *data,
+				    unsigned int len, u8 *out)
+{
+	struct sha256_state *sctx = shash_desc_ctx(desc);
+
+	sha256_update(sctx, data, len);
+	sha256_final(sctx, out);
+	return 0;
+}
+
+static int crypto_sha256_digest_generic(struct shash_desc *desc, const u8 *data,
+					unsigned int len, u8 *out)
+{
+	struct sha256_state *sctx = shash_desc_ctx(desc);
+
+	sha256_init(sctx);
+	sha256_update_generic(sctx, data, len);
+	sha256_final_generic(sctx, out);
+	return 0;
+}
+
+static int crypto_sha256_digest_arch(struct shash_desc *desc, const u8 *data,
+				     unsigned int len, u8 *out)
+{
+	sha256(data, len, out);
+	return 0;
+}
+
+static int crypto_sha224_init(struct shash_desc *desc)
+{
+	sha224_init(shash_desc_ctx(desc));
+	return 0;
+}
+
+static int crypto_sha224_final_generic(struct shash_desc *desc, u8 *out)
+{
+	sha224_final_generic(shash_desc_ctx(desc), out);
+	return 0;
+}
+
+static int crypto_sha224_final_arch(struct shash_desc *desc, u8 *out)
+{
+	sha224_final(shash_desc_ctx(desc), out);
+	return 0;
+}
+
+static struct shash_alg algs[] = {
+	{
+		.base.cra_name		= "sha256",
+		.base.cra_driver_name	= "sha256-generic",
+		.base.cra_priority	= 100,
+		.base.cra_blocksize	= SHA256_BLOCK_SIZE,
+		.base.cra_module	= THIS_MODULE,
+		.digestsize		= SHA256_DIGEST_SIZE,
+		.init			= crypto_sha256_init,
+		.update			= crypto_sha256_update_generic,
+		.final			= crypto_sha256_final_generic,
+		.finup			= crypto_sha256_finup_generic,
+		.digest			= crypto_sha256_digest_generic,
+		.descsize		= sizeof(struct sha256_state),
+	},
+	{
+		.base.cra_name		= "sha224",
+		.base.cra_driver_name	= "sha224-generic",
+		.base.cra_priority	= 100,
+		.base.cra_blocksize	= SHA224_BLOCK_SIZE,
+		.base.cra_module	= THIS_MODULE,
+		.digestsize		= SHA224_DIGEST_SIZE,
+		.init			= crypto_sha224_init,
+		.update			= crypto_sha256_update_generic,
+		.final			= crypto_sha224_final_generic,
+		.descsize		= sizeof(struct sha256_state),
+	},
+	{
+		.base.cra_name		= "sha256",
+		.base.cra_driver_name	= "sha256-" __stringify(ARCH),
+		.base.cra_priority	= 300,
+		.base.cra_blocksize	= SHA256_BLOCK_SIZE,
+		.base.cra_module	= THIS_MODULE,
+		.digestsize		= SHA256_DIGEST_SIZE,
+		.init			= crypto_sha256_init,
+		.update			= crypto_sha256_update_arch,
+		.final			= crypto_sha256_final_arch,
+		.finup			= crypto_sha256_finup_arch,
+		.digest			= crypto_sha256_digest_arch,
+		.descsize		= sizeof(struct sha256_state),
+	},
+	{
+		.base.cra_name		= "sha224",
+		.base.cra_driver_name	= "sha224-" __stringify(ARCH),
+		.base.cra_priority	= 300,
+		.base.cra_blocksize	= SHA224_BLOCK_SIZE,
+		.base.cra_module	= THIS_MODULE,
+		.digestsize		= SHA224_DIGEST_SIZE,
+		.init			= crypto_sha224_init,
+		.update			= crypto_sha256_update_arch,
+		.final			= crypto_sha224_final_arch,
+		.descsize		= sizeof(struct sha256_state),
+	},
+};
+
+static unsigned int num_algs;
+
+static int __init crypto_sha256_mod_init(void)
+{
+	/* register the arch flavours only if they differ from generic */
+	num_algs = ARRAY_SIZE(algs);
+	BUILD_BUG_ON(ARRAY_SIZE(algs) % 2 != 0);
+	if (!sha256_is_arch_optimized())
+		num_algs /= 2;
+	return crypto_register_shashes(algs, num_algs);
+}
+subsys_initcall(crypto_sha256_mod_init);
+
+static void __exit crypto_sha256_mod_exit(void)
+{
+	crypto_unregister_shashes(algs, num_algs);
+}
+module_exit(crypto_sha256_mod_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("Crypto API wrapper for the SHA-256 and SHA-224 library functions");
+
+MODULE_ALIAS_CRYPTO("sha256");
+MODULE_ALIAS_CRYPTO("sha256-generic");
+MODULE_ALIAS_CRYPTO("sha256-" __stringify(ARCH));
+MODULE_ALIAS_CRYPTO("sha224");
+MODULE_ALIAS_CRYPTO("sha224-generic");
+MODULE_ALIAS_CRYPTO("sha224-" __stringify(ARCH));
diff --git a/crypto/sha256_generic.c b/crypto/sha256_generic.c
deleted file mode 100644
index 05084e5bbaec8..0000000000000
--- a/crypto/sha256_generic.c
+++ /dev/null
@@ -1,102 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * Crypto API wrapper for the generic SHA256 code from lib/crypto/sha256.c
- *
- * Copyright (c) Jean-Luc Cooke
- * Copyright (c) Andrew McDonald
- * Copyright (c) 2002 James Morris
- * SHA224 Support Copyright 2007 Intel Corporation
- */
-#include
-#include
-#include
-#include
-#include
-
-const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] = {
-	0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, 0x47,
-	0x61, 0x02, 0xbb, 0x28, 0x82, 0x34, 0xc4, 0x15, 0xa2,
-	0xb0, 0x1f, 0x82, 0x8e, 0xa6, 0x2a, 0xc5, 0xb3, 0xe4,
-	0x2f
-};
-EXPORT_SYMBOL_GPL(sha224_zero_message_hash);
-
-const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] = {
-	0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14,
-	0x9a, 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24,
-	0x27, 0xae, 0x41, 0xe4, 0x64, 0x9b, 0x93, 0x4c,
-	0xa4, 0x95, 0x99, 0x1b, 0x78, 0x52, 0xb8, 0x55
-};
-EXPORT_SYMBOL_GPL(sha256_zero_message_hash);
-
-static void sha256_block(struct crypto_sha256_state *sctx, const u8 *input,
-			 int blocks)
-{
-	sha256_transform_blocks(sctx, input, blocks);
-}
-
-static int crypto_sha256_update(struct shash_desc *desc, const u8 *data,
-				unsigned int len)
-{
-	return sha256_base_do_update_blocks(desc, data, len, sha256_block);
-}
-
-static int crypto_sha256_finup(struct shash_desc *desc, const u8 *data,
-			       unsigned int len, u8 *hash)
-{
-	sha256_base_do_finup(desc, data, len, sha256_block);
-	return sha256_base_finish(desc, hash);
-}
-
-static struct shash_alg sha256_algs[2] = { {
-	.digestsize	= SHA256_DIGEST_SIZE,
-	.init		= sha256_base_init,
-	.update		= crypto_sha256_update,
-	.finup		= crypto_sha256_finup,
-	.descsize	= sizeof(struct crypto_sha256_state),
-	.base		= {
-		.cra_name	= "sha256",
-		.cra_driver_name= "sha256-generic",
-		.cra_priority	= 100,
-		.cra_flags	= CRYPTO_AHASH_ALG_BLOCK_ONLY |
-				  CRYPTO_AHASH_ALG_FINUP_MAX,
-		.cra_blocksize	= SHA256_BLOCK_SIZE,
-		.cra_module	= THIS_MODULE,
-	}
-}, {
-	.digestsize	= SHA224_DIGEST_SIZE,
-	.init		= sha224_base_init,
-	.update		= crypto_sha256_update,
-	.finup		= crypto_sha256_finup,
-	.descsize	= sizeof(struct crypto_sha256_state),
-	.base		= {
-		.cra_name	= "sha224",
-		.cra_driver_name= "sha224-generic",
-		.cra_priority	= 100,
-		.cra_flags	= CRYPTO_AHASH_ALG_BLOCK_ONLY |
-				  CRYPTO_AHASH_ALG_FINUP_MAX,
-		.cra_blocksize	= SHA224_BLOCK_SIZE,
-		.cra_module	= THIS_MODULE,
-	}
-} };
-
-static int __init sha256_generic_mod_init(void)
-{
-	return crypto_register_shashes(sha256_algs, ARRAY_SIZE(sha256_algs));
-}
-
-static void __exit sha256_generic_mod_fini(void)
-{
-	crypto_unregister_shashes(sha256_algs, ARRAY_SIZE(sha256_algs));
-}
-
-subsys_initcall(sha256_generic_mod_init);
-module_exit(sha256_generic_mod_fini);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithm");
-
-MODULE_ALIAS_CRYPTO("sha224");
-MODULE_ALIAS_CRYPTO("sha224-generic");
-MODULE_ALIAS_CRYPTO("sha256");
-MODULE_ALIAS_CRYPTO("sha256-generic");
diff --git a/include/crypto/internal/sha2.h b/include/crypto/internal/sha2.h
new file mode 100644
index 0000000000000..d641c67abcbc3
--- /dev/null
+++ b/include/crypto/internal/sha2.h
@@ -0,0 +1,28 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+
+#ifndef _CRYPTO_INTERNAL_SHA2_H
+#define _CRYPTO_INTERNAL_SHA2_H
+
+#include
+
+void sha256_update_generic(struct sha256_state *sctx,
+			   const u8 *data, size_t len);
+void sha256_final_generic(struct sha256_state *sctx,
+			  u8 out[SHA256_DIGEST_SIZE]);
+void sha224_final_generic(struct sha256_state *sctx,
+			  u8 out[SHA224_DIGEST_SIZE]);
+
+#if IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256)
+bool sha256_is_arch_optimized(void);
+#else
+static inline bool sha256_is_arch_optimized(void)
+{
+	return false;
+}
+#endif
+void sha256_blocks_generic(u32 state[SHA256_STATE_WORDS],
+			   const u8 *data, size_t nblocks);
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
+			const u8 *data, size_t nblocks);
+
+#endif /* _CRYPTO_INTERNAL_SHA2_H */
diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h
index abbd882f7849f..444484d1b1cfa 100644
--- a/include/crypto/sha2.h
+++ b/include/crypto/sha2.h
@@ -11,10 +11,11 @@
 #define SHA224_DIGEST_SIZE	28
 #define SHA224_BLOCK_SIZE	64
 
 #define SHA256_DIGEST_SIZE	32
 #define SHA256_BLOCK_SIZE	64
+#define SHA256_STATE_WORDS	8
 
 #define SHA384_DIGEST_SIZE	48
 #define SHA384_BLOCK_SIZE	128
 
 #define SHA512_DIGEST_SIZE	64
@@ -64,36 +65,26 @@ extern const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE];
 
 extern const u8 sha384_zero_message_hash[SHA384_DIGEST_SIZE];
 
 extern const u8 sha512_zero_message_hash[SHA512_DIGEST_SIZE];
 
 struct crypto_sha256_state {
-	u32 state[SHA256_DIGEST_SIZE / 4];
+	u32 state[SHA256_STATE_WORDS];
 	u64 count;
 };
 
 struct sha256_state {
-	u32 state[SHA256_DIGEST_SIZE / 4];
+	u32 state[SHA256_STATE_WORDS];
 	u64 count;
 	u8 buf[SHA256_BLOCK_SIZE];
 };
 
 struct sha512_state {
 	u64 state[SHA512_DIGEST_SIZE / 8];
 	u64 count[2];
 	u8 buf[SHA512_BLOCK_SIZE];
 };
 
-/*
- * Stand-alone implementation of the SHA256 algorithm. It is designed to
- * have as little dependencies as possible so it can be used in the
- * kexec_file purgatory. In other cases you should generally use the
- * hash APIs from include/crypto/hash.h. Especially when hashing large
- * amounts of data as those APIs may be hw-accelerated.
- *
- * For details see lib/crypto/sha256.c
- */
-
 static inline void sha256_init(struct sha256_state *sctx)
 {
 	sctx->state[0] = SHA256_H0;
 	sctx->state[1] = SHA256_H1;
 	sctx->state[2] = SHA256_H2;
diff --git a/include/crypto/sha256_base.h b/include/crypto/sha256_base.h
index 08cd5e41d4fdb..6878fb9c26c04 100644
--- a/include/crypto/sha256_base.h
+++ b/include/crypto/sha256_base.h
@@ -7,11 +7,11 @@
 
 #ifndef _CRYPTO_SHA256_BASE_H
 #define _CRYPTO_SHA256_BASE_H
 
 #include
-#include
+#include
 #include
 #include
 #include
 #include
 
@@ -172,9 +172,12 @@ static inline int sha256_base_finish(struct shash_desc *desc, u8 *out)
 	struct crypto_sha256_state *sctx = shash_desc_ctx(desc);
 
 	return __sha256_base_finish(sctx->state, out, digest_size);
 }
 
-void sha256_transform_blocks(struct crypto_sha256_state *sst,
-			     const u8 *input, int blocks);
+static inline void sha256_transform_blocks(struct crypto_sha256_state *sst,
+					   const u8 *input, int blocks)
+{
+	sha256_blocks_generic(sst->state, input, blocks);
+}
 
 #endif /* _CRYPTO_SHA256_BASE_H */
diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
index af2368799579f..7fe678047939b 100644
--- a/lib/crypto/Kconfig
+++ b/lib/crypto/Kconfig
@@ -137,10 +137,29 @@ config CRYPTO_LIB_CHACHA20POLY1305
 config CRYPTO_LIB_SHA1
 	tristate
 
 config CRYPTO_LIB_SHA256
 	tristate
+	help
+	  Enable the SHA-256 library interface. This interface may be fulfilled
+	  by either the generic implementation or an arch-specific one, if one
+	  is available and enabled.
+
+config CRYPTO_ARCH_HAVE_LIB_SHA256
+	bool
+	help
+	  Declares whether the architecture provides an arch-specific
+	  accelerated implementation of the SHA-256 library interface.
+
+config CRYPTO_LIB_SHA256_GENERIC
+	tristate
+	default CRYPTO_LIB_SHA256 if !CRYPTO_ARCH_HAVE_LIB_SHA256
+	help
+	  This symbol can be selected by arch implementations of the SHA-256
+	  library interface that require the generic code as a fallback, e.g.,
+	  for SIMD implementations. If no arch specific implementation is
+	  enabled, this implementation serves the users of CRYPTO_LIB_SHA256.
 
 config CRYPTO_LIB_SM3
 	tristate
 
 if !KMSAN # avoid false positives from assembly
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index a89bab377de1a..182d1088d8893 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -9,16 +9,22 @@
  * Copyright (c) Andrew McDonald
  * Copyright (c) 2002 James Morris
  * Copyright (c) 2014 Red Hat Inc.
  */
 
-#include
-#include
+#include
 #include
 #include
 #include
+#include
 
+/*
+ * If __DISABLE_EXPORTS is defined, then this file is being compiled for a
+ * pre-boot environment.  In that case, ignore the kconfig options and build the
+ * generic code only.
+ */
+#if IS_ENABLED(CONFIG_CRYPTO_LIB_SHA256_GENERIC) || defined(__DISABLE_EXPORTS)
 static const u32 SHA256_K[] = {
 	0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5,
 	0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
 	0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3,
 	0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
@@ -67,11 +73,12 @@ static inline void BLEND_OP(int I, u32 *W)
 		t2 = e0(a) + Maj(a, b, c);	\
 		d += t1;			\
 		h = t1 + t2;			\
 	} while (0)
 
-static void sha256_transform(u32 *state, const u8 *input, u32 *W)
+static void sha256_block_generic(u32 state[SHA256_STATE_WORDS],
+				 const u8 *input, u32 W[64])
 {
 	u32 a, b, c, d, e, f, g, h;
 	int i;
 
 	/* load the input */
@@ -116,45 +123,109 @@ static void sha256_transform(u32 *state, const u8 *input, u32 *W)
 
 	state[0] += a; state[1] += b; state[2] += c; state[3] += d;
 	state[4] += e; state[5] += f; state[6] += g; state[7] += h;
 }
 
-void sha256_transform_blocks(struct crypto_sha256_state *sst,
-			     const u8 *input, int blocks)
+void sha256_blocks_generic(u32 state[SHA256_STATE_WORDS],
+			   const u8 *data, size_t nblocks)
 {
 	u32 W[64];
 
 	do {
-		sha256_transform(sst->state, input, W);
-		input += SHA256_BLOCK_SIZE;
-	} while (--blocks);
+		sha256_block_generic(state, data, W);
+		data += SHA256_BLOCK_SIZE;
+	} while (--nblocks);
 
 	memzero_explicit(W, sizeof(W));
 }
-EXPORT_SYMBOL_GPL(sha256_transform_blocks);
+EXPORT_SYMBOL_GPL(sha256_blocks_generic);
+#endif /* CONFIG_CRYPTO_LIB_SHA256_GENERIC || __DISABLE_EXPORTS */
+
+static inline void sha256_blocks(u32 state[SHA256_STATE_WORDS], const u8 *data,
+				 size_t nblocks, bool force_generic)
+{
+#if IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256) && !defined(__DISABLE_EXPORTS)
+	if (!force_generic)
+		return sha256_blocks_arch(state, data, nblocks);
+#endif
+	sha256_blocks_generic(state, data, nblocks);
+}
+
+static inline void __sha256_update(struct sha256_state *sctx, const u8 *data,
+				   size_t len, bool force_generic)
+{
+	size_t partial = sctx->count % SHA256_BLOCK_SIZE;
+
+	sctx->count += len;
+
+	if (partial + len >= SHA256_BLOCK_SIZE) {
+		size_t nblocks;
+
+		if (partial) {
+			size_t l = SHA256_BLOCK_SIZE - partial;
+
+			memcpy(&sctx->buf[partial], data, l);
+			data += l;
+			len -= l;
+
+			sha256_blocks(sctx->state, sctx->buf, 1, force_generic);
+		}
+
+		nblocks = len / SHA256_BLOCK_SIZE;
+		len %= SHA256_BLOCK_SIZE;
+
+		if (nblocks) {
+			sha256_blocks(sctx->state, data, nblocks,
+				      force_generic);
+			data += nblocks * SHA256_BLOCK_SIZE;
+		}
+		partial = 0;
+	}
+	if (len)
+		memcpy(&sctx->buf[partial], data, len);
+}
 
 void sha256_update(struct sha256_state *sctx, const u8 *data, unsigned int len)
 {
-	lib_sha256_base_do_update(sctx, data, len, sha256_transform_blocks);
+	__sha256_update(sctx, data, len, false);
 }
 EXPORT_SYMBOL(sha256_update);
 
-static void __sha256_final(struct sha256_state *sctx, u8 *out, int digest_size)
+static inline void __sha256_final(struct sha256_state *sctx, u8 *out,
+				  size_t digest_size, bool force_generic)
 {
-	lib_sha256_base_do_finalize(sctx, sha256_transform_blocks);
-	lib_sha256_base_finish(sctx, out, digest_size);
+	const size_t bit_offset = SHA256_BLOCK_SIZE - sizeof(__be64);
+	__be64 *bits = (__be64 *)&sctx->buf[bit_offset];
+	size_t partial = sctx->count % SHA256_BLOCK_SIZE;
+	size_t i;
+
+	sctx->buf[partial++] = 0x80;
+	if (partial > bit_offset) {
+		memset(&sctx->buf[partial], 0, SHA256_BLOCK_SIZE - partial);
+		sha256_blocks(sctx->state, sctx->buf, 1, force_generic);
+		partial = 0;
+	}
+
+	memset(&sctx->buf[partial], 0, bit_offset - partial);
+	*bits = cpu_to_be64(sctx->count << 3);
+	sha256_blocks(sctx->state, sctx->buf, 1, force_generic);
+
+	for (i = 0; i < digest_size; i += 4)
+		put_unaligned_be32(sctx->state[i / 4], out + i);
+
+	memzero_explicit(sctx, sizeof(*sctx));
 }
 
 void sha256_final(struct sha256_state *sctx, u8 *out)
 {
-	__sha256_final(sctx, out, 32);
+	__sha256_final(sctx, out, SHA256_DIGEST_SIZE, false);
 }
 EXPORT_SYMBOL(sha256_final);
 
 void sha224_final(struct sha256_state *sctx, u8 *out)
 {
-	__sha256_final(sctx, out, 28);
+	__sha256_final(sctx, out, SHA224_DIGEST_SIZE, false);
 }
 EXPORT_SYMBOL(sha224_final);
 
 void sha256(const u8 *data, unsigned int len, u8 *out)
 {
@@ -164,7 +235,28 @@ void sha256(const u8 *data, unsigned int len, u8 *out)
 	sha256_update(&sctx, data, len);
 	sha256_final(&sctx, out);
 }
 EXPORT_SYMBOL(sha256);
 
+#if IS_ENABLED(CONFIG_CRYPTO_LIB_SHA256_GENERIC) && !defined(__DISABLE_EXPORTS)
+void sha256_update_generic(struct sha256_state *sctx,
+			   const u8 *data, size_t len)
+{
+	__sha256_update(sctx, data, len, true);
+}
+EXPORT_SYMBOL(sha256_update_generic);
+
+void sha256_final_generic(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE])
+{
+	__sha256_final(sctx, out, SHA256_DIGEST_SIZE, true);
+}
+EXPORT_SYMBOL(sha256_final_generic);
+
+void sha224_final_generic(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE])
+{
+	__sha256_final(sctx, out, SHA224_DIGEST_SIZE, true);
+}
+EXPORT_SYMBOL(sha224_final_generic);
+#endif
+
 MODULE_DESCRIPTION("SHA-256 Algorithm");
 MODULE_LICENSE("GPL");
-- 
2.49.0

From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    sparclinux@vger.kernel.org, linux-s390@vger.kernel.org, x86@kernel.org,
    Ard Biesheuvel, "Jason A. Donenfeld", Linus Torvalds
Subject: [PATCH 02/13] crypto: arm/sha256 - implement library instead of shash
Date: Fri, 25 Apr 2025 23:50:28 -0700
Message-ID: <20250426065041.1551914-3-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>
References: <20250426065041.1551914-1-ebiggers@kernel.org>

From: Eric Biggers

Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library.  This is much simpler; it
makes the SHA-256 library functions arch-optimized, and it fixes the
longstanding issue where the arch-optimized SHA-256 was disabled by
default.  SHA-256 still remains available through crypto_shash, but
individual architectures no longer need to handle it.

To merge the scalar, NEON, and CE code all into one module cleanly, add
!CPU_V7M as a direct dependency of the CE code.  Previously, !CPU_V7M
was only a direct dependency of the scalar and NEON code.  The result is
still the same because CPU_V7M implies !KERNEL_MODE_NEON, so !CPU_V7M
was already an indirect dependency of the CE code.

To match sha256_blocks_arch(), change the type of the nblocks parameter
of the assembly functions from int to size_t.  The assembly functions
actually already treated it as size_t.

While renaming the assembly files, also fix the naming quirk where
"sha2" meant sha256.  (SHA-512 is also part of SHA-2.)
Signed-off-by: Eric Biggers Reviewed-by: Ard Biesheuvel --- arch/arm/configs/exynos_defconfig | 1 - arch/arm/configs/milbeaut_m10v_defconfig | 1 - arch/arm/configs/multi_v7_defconfig | 1 - arch/arm/configs/omap2plus_defconfig | 1 - arch/arm/configs/pxa_defconfig | 1 - arch/arm/crypto/Kconfig | 21 ---- arch/arm/crypto/Makefile | 8 +- arch/arm/crypto/sha2-ce-glue.c | 87 -------------- arch/arm/crypto/sha256_glue.c | 107 ------------------ arch/arm/crypto/sha256_glue.h | 9 -- arch/arm/crypto/sha256_neon_glue.c | 75 ------------ arch/arm/lib/crypto/.gitignore | 1 + arch/arm/lib/crypto/Kconfig | 6 + arch/arm/lib/crypto/Makefile | 8 +- arch/arm/{ =3D> lib}/crypto/sha256-armv4.pl | 0 .../sha2-ce-core.S =3D> lib/crypto/sha256-ce.S} | 10 +- arch/arm/lib/crypto/sha256.c | 64 +++++++++++ 17 files changed, 84 insertions(+), 317 deletions(-) delete mode 100644 arch/arm/crypto/sha2-ce-glue.c delete mode 100644 arch/arm/crypto/sha256_glue.c delete mode 100644 arch/arm/crypto/sha256_glue.h delete mode 100644 arch/arm/crypto/sha256_neon_glue.c rename arch/arm/{ =3D> lib}/crypto/sha256-armv4.pl (100%) rename arch/arm/{crypto/sha2-ce-core.S =3D> lib/crypto/sha256-ce.S} (91%) create mode 100644 arch/arm/lib/crypto/sha256.c diff --git a/arch/arm/configs/exynos_defconfig b/arch/arm/configs/exynos_de= fconfig index 7ad48fdda1dac..244dd5dec98bd 100644 --- a/arch/arm/configs/exynos_defconfig +++ b/arch/arm/configs/exynos_defconfig @@ -362,11 +362,10 @@ CONFIG_CRYPTO_LZ4=3Dm CONFIG_CRYPTO_USER_API_HASH=3Dm CONFIG_CRYPTO_USER_API_SKCIPHER=3Dm CONFIG_CRYPTO_USER_API_RNG=3Dm CONFIG_CRYPTO_USER_API_AEAD=3Dm CONFIG_CRYPTO_SHA1_ARM_NEON=3Dm -CONFIG_CRYPTO_SHA256_ARM=3Dm CONFIG_CRYPTO_SHA512_ARM=3Dm CONFIG_CRYPTO_AES_ARM_BS=3Dm CONFIG_CRYPTO_CHACHA20_NEON=3Dm CONFIG_CRYPTO_DEV_EXYNOS_RNG=3Dy CONFIG_CRYPTO_DEV_S5P=3Dy diff --git a/arch/arm/configs/milbeaut_m10v_defconfig b/arch/arm/configs/mi= lbeaut_m10v_defconfig index acd16204f8d7f..fce33c1eb65bf 100644 --- 
a/arch/arm/configs/milbeaut_m10v_defconfig +++ b/arch/arm/configs/milbeaut_m10v_defconfig @@ -99,11 +99,10 @@ CONFIG_CRYPTO_MANAGER=3Dy CONFIG_CRYPTO_AES=3Dy CONFIG_CRYPTO_SEQIV=3Dm CONFIG_CRYPTO_GHASH_ARM_CE=3Dm CONFIG_CRYPTO_SHA1_ARM_NEON=3Dm CONFIG_CRYPTO_SHA1_ARM_CE=3Dm -CONFIG_CRYPTO_SHA2_ARM_CE=3Dm CONFIG_CRYPTO_SHA512_ARM=3Dm CONFIG_CRYPTO_AES_ARM=3Dm CONFIG_CRYPTO_AES_ARM_BS=3Dm CONFIG_CRYPTO_AES_ARM_CE=3Dm CONFIG_CRYPTO_CHACHA20_NEON=3Dm diff --git a/arch/arm/configs/multi_v7_defconfig b/arch/arm/configs/multi_v= 7_defconfig index ad037c175fdb0..96178acedad0b 100644 --- a/arch/arm/configs/multi_v7_defconfig +++ b/arch/arm/configs/multi_v7_defconfig @@ -1299,11 +1299,10 @@ CONFIG_CRYPTO_USER_API_SKCIPHER=3Dm CONFIG_CRYPTO_USER_API_RNG=3Dm CONFIG_CRYPTO_USER_API_AEAD=3Dm CONFIG_CRYPTO_GHASH_ARM_CE=3Dm CONFIG_CRYPTO_SHA1_ARM_NEON=3Dm CONFIG_CRYPTO_SHA1_ARM_CE=3Dm -CONFIG_CRYPTO_SHA2_ARM_CE=3Dm CONFIG_CRYPTO_SHA512_ARM=3Dm CONFIG_CRYPTO_AES_ARM=3Dm CONFIG_CRYPTO_AES_ARM_BS=3Dm CONFIG_CRYPTO_AES_ARM_CE=3Dm CONFIG_CRYPTO_CHACHA20_NEON=3Dm diff --git a/arch/arm/configs/omap2plus_defconfig b/arch/arm/configs/omap2p= lus_defconfig index 113d6dfe52435..57d9e4dba29e3 100644 --- a/arch/arm/configs/omap2plus_defconfig +++ b/arch/arm/configs/omap2plus_defconfig @@ -695,11 +695,10 @@ CONFIG_NLS_CODEPAGE_437=3Dy CONFIG_NLS_ISO8859_1=3Dy CONFIG_SECURITY=3Dy CONFIG_CRYPTO_MICHAEL_MIC=3Dy CONFIG_CRYPTO_GHASH_ARM_CE=3Dm CONFIG_CRYPTO_SHA1_ARM_NEON=3Dm -CONFIG_CRYPTO_SHA256_ARM=3Dm CONFIG_CRYPTO_SHA512_ARM=3Dm CONFIG_CRYPTO_AES_ARM=3Dm CONFIG_CRYPTO_AES_ARM_BS=3Dm CONFIG_CRYPTO_CHACHA20_NEON=3Dm CONFIG_CRYPTO_DEV_OMAP=3Dm diff --git a/arch/arm/configs/pxa_defconfig b/arch/arm/configs/pxa_defconfig index de0ac8f521d76..fa631523616f8 100644 --- a/arch/arm/configs/pxa_defconfig +++ b/arch/arm/configs/pxa_defconfig @@ -658,11 +658,10 @@ CONFIG_CRYPTO_WP512=3Dm CONFIG_CRYPTO_ANUBIS=3Dm CONFIG_CRYPTO_XCBC=3Dm CONFIG_CRYPTO_DEFLATE=3Dy CONFIG_CRYPTO_LZO=3Dy CONFIG_CRYPTO_SHA1_ARM=3Dm 
-CONFIG_CRYPTO_SHA256_ARM=m CONFIG_CRYPTO_SHA512_ARM=m CONFIG_CRYPTO_AES_ARM=m CONFIG_CRC_CCITT=y CONFIG_CRC_T10DIF=m CONFIG_FONTS=y diff --git a/arch/arm/crypto/Kconfig b/arch/arm/crypto/Kconfig index 1f889d6bab77d..7efb9a8596e4e 100644 --- a/arch/arm/crypto/Kconfig +++ b/arch/arm/crypto/Kconfig @@ -91,31 +91,10 @@ config CRYPTO_SHA1_ARM_CE help SHA-1 secure hash algorithm (FIPS 180) Architecture: arm using ARMv8 Crypto Extensions -config CRYPTO_SHA2_ARM_CE - tristate "Hash functions: SHA-224 and SHA-256 (ARMv8 Crypto Extensions)" - depends on KERNEL_MODE_NEON - select CRYPTO_SHA256_ARM - select CRYPTO_HASH - help - SHA-224 and SHA-256 secure hash algorithms (FIPS 180) - - Architecture: arm using - - ARMv8 Crypto Extensions - -config CRYPTO_SHA256_ARM - tristate "Hash functions: SHA-224 and SHA-256 (NEON)" - select CRYPTO_HASH - depends on !CPU_V7M - help - SHA-224 and SHA-256 secure hash algorithms (FIPS 180) - - Architecture: arm using - - NEON (Advanced SIMD) extensions - config CRYPTO_SHA512_ARM tristate "Hash functions: SHA-384 and SHA-512 (NEON)" select CRYPTO_HASH depends on !CPU_V7M help diff --git a/arch/arm/crypto/Makefile b/arch/arm/crypto/Makefile index ecabe6603e080..8479137c6e800 100644 --- a/arch/arm/crypto/Makefile +++ b/arch/arm/crypto/Makefile @@ -5,32 +5,27 @@ obj-$(CONFIG_CRYPTO_AES_ARM) += aes-arm.o obj-$(CONFIG_CRYPTO_AES_ARM_BS) += aes-arm-bs.o obj-$(CONFIG_CRYPTO_SHA1_ARM) += sha1-arm.o obj-$(CONFIG_CRYPTO_SHA1_ARM_NEON) += sha1-arm-neon.o -obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o obj-$(CONFIG_CRYPTO_SHA512_ARM) += sha512-arm.o obj-$(CONFIG_CRYPTO_BLAKE2B_NEON) += blake2b-neon.o obj-$(CONFIG_CRYPTO_NHPOLY1305_NEON) += nhpoly1305-neon.o obj-$(CONFIG_CRYPTO_CURVE25519_NEON) += curve25519-neon.o obj-$(CONFIG_CRYPTO_AES_ARM_CE) += aes-arm-ce.o obj-$(CONFIG_CRYPTO_SHA1_ARM_CE) += sha1-arm-ce.o -obj-$(CONFIG_CRYPTO_SHA2_ARM_CE) += sha2-arm-ce.o obj-$(CONFIG_CRYPTO_GHASH_ARM_CE) +=
ghash-arm-ce.o =20 aes-arm-y :=3D aes-cipher-core.o aes-cipher-glue.o aes-arm-bs-y :=3D aes-neonbs-core.o aes-neonbs-glue.o sha1-arm-y :=3D sha1-armv4-large.o sha1_glue.o sha1-arm-neon-y :=3D sha1-armv7-neon.o sha1_neon_glue.o -sha256-arm-neon-$(CONFIG_KERNEL_MODE_NEON) :=3D sha256_neon_glue.o -sha256-arm-y :=3D sha256-core.o sha256_glue.o $(sha256-arm-neon-y) sha512-arm-neon-$(CONFIG_KERNEL_MODE_NEON) :=3D sha512-neon-glue.o sha512-arm-y :=3D sha512-core.o sha512-glue.o $(sha512-arm-neon-y) blake2b-neon-y :=3D blake2b-neon-core.o blake2b-neon-glue.o sha1-arm-ce-y :=3D sha1-ce-core.o sha1-ce-glue.o -sha2-arm-ce-y :=3D sha2-ce-core.o sha2-ce-glue.o aes-arm-ce-y :=3D aes-ce-core.o aes-ce-glue.o ghash-arm-ce-y :=3D ghash-ce-core.o ghash-ce-glue.o nhpoly1305-neon-y :=3D nh-neon-core.o nhpoly1305-neon-glue.o curve25519-neon-y :=3D curve25519-core.o curve25519-glue.o =20 @@ -38,11 +33,10 @@ quiet_cmd_perl =3D PERL $@ cmd_perl =3D $(PERL) $(<) > $(@) =20 $(obj)/%-core.S: $(src)/%-armv4.pl $(call cmd,perl) =20 -clean-files +=3D sha256-core.S sha512-core.S +clean-files +=3D sha512-core.S =20 aflags-thumb2-$(CONFIG_THUMB2_KERNEL) :=3D -U__thumb2__ -D__thumb2__=3D1 =20 -AFLAGS_sha256-core.o +=3D $(aflags-thumb2-y) AFLAGS_sha512-core.o +=3D $(aflags-thumb2-y) diff --git a/arch/arm/crypto/sha2-ce-glue.c b/arch/arm/crypto/sha2-ce-glue.c deleted file mode 100644 index 1e9d16f796787..0000000000000 --- a/arch/arm/crypto/sha2-ce-glue.c +++ /dev/null @@ -1,87 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * sha2-ce-glue.c - SHA-224/SHA-256 using ARMv8 Crypto Extensions - * - * Copyright (C) 2015 Linaro Ltd - */ - -#include -#include -#include -#include -#include -#include -#include - -MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash using ARMv8 Crypto Extensi= ons"); -MODULE_AUTHOR("Ard Biesheuvel "); -MODULE_LICENSE("GPL v2"); - -asmlinkage void sha2_ce_transform(struct crypto_sha256_state *sst, - u8 const *src, int blocks); - -static int sha2_ce_update(struct shash_desc 
*desc, const u8 *data, - unsigned int len) -{ - int remain; - - kernel_neon_begin(); - remain =3D sha256_base_do_update_blocks(desc, data, len, - sha2_ce_transform); - kernel_neon_end(); - return remain; -} - -static int sha2_ce_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - kernel_neon_begin(); - sha256_base_do_finup(desc, data, len, sha2_ce_transform); - kernel_neon_end(); - return sha256_base_finish(desc, out); -} - -static struct shash_alg algs[] =3D { { - .init =3D sha224_base_init, - .update =3D sha2_ce_update, - .finup =3D sha2_ce_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .digestsize =3D SHA224_DIGEST_SIZE, - .base =3D { - .cra_name =3D "sha224", - .cra_driver_name =3D "sha224-ce", - .cra_priority =3D 300, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA256_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -}, { - .init =3D sha256_base_init, - .update =3D sha2_ce_update, - .finup =3D sha2_ce_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .digestsize =3D SHA256_DIGEST_SIZE, - .base =3D { - .cra_name =3D "sha256", - .cra_driver_name =3D "sha256-ce", - .cra_priority =3D 300, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA256_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -} }; - -static int __init sha2_ce_mod_init(void) -{ - return crypto_register_shashes(algs, ARRAY_SIZE(algs)); -} - -static void __exit sha2_ce_mod_fini(void) -{ - crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); -} - -module_cpu_feature_match(SHA2, sha2_ce_mod_init); -module_exit(sha2_ce_mod_fini); diff --git a/arch/arm/crypto/sha256_glue.c b/arch/arm/crypto/sha256_glue.c deleted file mode 100644 index d04c4e6bae6d3..0000000000000 --- a/arch/arm/crypto/sha256_glue.c +++ /dev/null @@ -1,107 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Glue code for the SHA256 Secure Hash Algorithm assembly implementation - * 
using optimized ARM assembler and NEON instructions. - * - * Copyright =C2=A9 2015 Google Inc. - * - * This file is based on sha256_ssse3_glue.c: - * Copyright (C) 2013 Intel Corporation - * Author: Tim Chen - */ - -#include -#include -#include -#include -#include -#include - -#include "sha256_glue.h" - -asmlinkage void sha256_block_data_order(struct crypto_sha256_state *state, - const u8 *data, int num_blks); - -static int crypto_sha256_arm_update(struct shash_desc *desc, const u8 *dat= a, - unsigned int len) -{ - /* make sure casting to sha256_block_fn() is safe */ - BUILD_BUG_ON(offsetof(struct crypto_sha256_state, state) !=3D 0); - - return sha256_base_do_update_blocks(desc, data, len, - sha256_block_data_order); -} - -static int crypto_sha256_arm_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - sha256_base_do_finup(desc, data, len, sha256_block_data_order); - return sha256_base_finish(desc, out); -} - -static struct shash_alg algs[] =3D { { - .digestsize =3D SHA256_DIGEST_SIZE, - .init =3D sha256_base_init, - .update =3D crypto_sha256_arm_update, - .finup =3D crypto_sha256_arm_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base =3D { - .cra_name =3D "sha256", - .cra_driver_name =3D "sha256-asm", - .cra_priority =3D 150, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA256_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -}, { - .digestsize =3D SHA224_DIGEST_SIZE, - .init =3D sha224_base_init, - .update =3D crypto_sha256_arm_update, - .finup =3D crypto_sha256_arm_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base =3D { - .cra_name =3D "sha224", - .cra_driver_name =3D "sha224-asm", - .cra_priority =3D 150, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA224_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -} }; - -static int __init sha256_mod_init(void) -{ - int res =3D crypto_register_shashes(algs, 
ARRAY_SIZE(algs)); - - if (res < 0) - return res; - - if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && cpu_has_neon()) { - res =3D crypto_register_shashes(sha256_neon_algs, - ARRAY_SIZE(sha256_neon_algs)); - - if (res < 0) - crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); - } - - return res; -} - -static void __exit sha256_mod_fini(void) -{ - crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); - - if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && cpu_has_neon()) - crypto_unregister_shashes(sha256_neon_algs, - ARRAY_SIZE(sha256_neon_algs)); -} - -module_init(sha256_mod_init); -module_exit(sha256_mod_fini); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA256 Secure Hash Algorithm (ARM), including NEON"); - -MODULE_ALIAS_CRYPTO("sha256"); diff --git a/arch/arm/crypto/sha256_glue.h b/arch/arm/crypto/sha256_glue.h deleted file mode 100644 index 9881c9a115d1f..0000000000000 --- a/arch/arm/crypto/sha256_glue.h +++ /dev/null @@ -1,9 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0 */ -#ifndef _CRYPTO_SHA256_GLUE_H -#define _CRYPTO_SHA256_GLUE_H - -#include - -extern struct shash_alg sha256_neon_algs[2]; - -#endif /* _CRYPTO_SHA256_GLUE_H */ diff --git a/arch/arm/crypto/sha256_neon_glue.c b/arch/arm/crypto/sha256_ne= on_glue.c deleted file mode 100644 index 76eb3cdc21c96..0000000000000 --- a/arch/arm/crypto/sha256_neon_glue.c +++ /dev/null @@ -1,75 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Glue code for the SHA256 Secure Hash Algorithm assembly implementation - * using NEON instructions. - * - * Copyright =C2=A9 2015 Google Inc. 
- * - * This file is based on sha512_neon_glue.c: - * Copyright =C2=A9 2014 Jussi Kivilinna - */ - -#include -#include -#include -#include -#include -#include - -#include "sha256_glue.h" - -asmlinkage void sha256_block_data_order_neon( - struct crypto_sha256_state *digest, const u8 *data, int num_blks); - -static int crypto_sha256_neon_update(struct shash_desc *desc, const u8 *da= ta, - unsigned int len) -{ - int remain; - - kernel_neon_begin(); - remain =3D sha256_base_do_update_blocks(desc, data, len, - sha256_block_data_order_neon); - kernel_neon_end(); - return remain; -} - -static int crypto_sha256_neon_finup(struct shash_desc *desc, const u8 *dat= a, - unsigned int len, u8 *out) -{ - kernel_neon_begin(); - sha256_base_do_finup(desc, data, len, sha256_block_data_order_neon); - kernel_neon_end(); - return sha256_base_finish(desc, out); -} - -struct shash_alg sha256_neon_algs[] =3D { { - .digestsize =3D SHA256_DIGEST_SIZE, - .init =3D sha256_base_init, - .update =3D crypto_sha256_neon_update, - .finup =3D crypto_sha256_neon_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base =3D { - .cra_name =3D "sha256", - .cra_driver_name =3D "sha256-neon", - .cra_priority =3D 250, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA256_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -}, { - .digestsize =3D SHA224_DIGEST_SIZE, - .init =3D sha224_base_init, - .update =3D crypto_sha256_neon_update, - .finup =3D crypto_sha256_neon_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base =3D { - .cra_name =3D "sha224", - .cra_driver_name =3D "sha224-neon", - .cra_priority =3D 250, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA224_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -} }; diff --git a/arch/arm/lib/crypto/.gitignore b/arch/arm/lib/crypto/.gitignore index 0d47d4f21c6de..12d74d8b03d0a 100644 --- a/arch/arm/lib/crypto/.gitignore +++ 
b/arch/arm/lib/crypto/.gitignore @@ -1,2 +1,3 @@ # SPDX-License-Identifier: GPL-2.0-only poly1305-core.S +sha256-core.S diff --git a/arch/arm/lib/crypto/Kconfig b/arch/arm/lib/crypto/Kconfig index e8444fd0aae30..9f3ff30f40328 100644 --- a/arch/arm/lib/crypto/Kconfig +++ b/arch/arm/lib/crypto/Kconfig @@ -20,5 +20,11 @@ config CRYPTO_CHACHA20_NEON config CRYPTO_POLY1305_ARM tristate default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 + +config CRYPTO_SHA256_ARM + tristate + depends on !CPU_V7M + default CRYPTO_LIB_SHA256 + select CRYPTO_ARCH_HAVE_LIB_SHA256 diff --git a/arch/arm/lib/crypto/Makefile b/arch/arm/lib/crypto/Makefile index 4c042a4c77ed6..431f77c3ff6fd 100644 --- a/arch/arm/lib/crypto/Makefile +++ b/arch/arm/lib/crypto/Makefile @@ -8,19 +8,25 @@ chacha-neon-y := chacha-scalar-core.o chacha-glue.o chacha-neon-$(CONFIG_KERNEL_MODE_NEON) += chacha-neon-core.o obj-$(CONFIG_CRYPTO_POLY1305_ARM) += poly1305-arm.o poly1305-arm-y := poly1305-core.o poly1305-glue.o +obj-$(CONFIG_CRYPTO_SHA256_ARM) += sha256-arm.o +sha256-arm-y := sha256.o sha256-core.o +sha256-arm-$(CONFIG_KERNEL_MODE_NEON) += sha256-ce.o + quiet_cmd_perl = PERL $@ cmd_perl = $(PERL) $(<) > $(@) $(obj)/%-core.S: $(src)/%-armv4.pl $(call cmd,perl) -clean-files += poly1305-core.S +clean-files += poly1305-core.S sha256-core.S aflags-thumb2-$(CONFIG_THUMB2_KERNEL) := -U__thumb2__ -D__thumb2__=1 # massage the perlasm code a bit so we only get the NEON routine if we need it poly1305-aflags-$(CONFIG_CPU_V7) := -U__LINUX_ARM_ARCH__ -D__LINUX_ARM_ARCH__=5 poly1305-aflags-$(CONFIG_KERNEL_MODE_NEON) := -U__LINUX_ARM_ARCH__ -D__LINUX_ARM_ARCH__=7 AFLAGS_poly1305-core.o += $(poly1305-aflags-y) $(aflags-thumb2-y) + +AFLAGS_sha256-core.o += $(aflags-thumb2-y) diff --git a/arch/arm/crypto/sha256-armv4.pl b/arch/arm/lib/crypto/sha256-armv4.pl similarity index 100% rename from arch/arm/crypto/sha256-armv4.pl rename to
arch/arm/lib/crypto/sha256-armv4.pl diff --git a/arch/arm/crypto/sha2-ce-core.S b/arch/arm/lib/crypto/sha256-ce.S similarity index 91% rename from arch/arm/crypto/sha2-ce-core.S rename to arch/arm/lib/crypto/sha256-ce.S index b6369d2440a19..ac2c9b01b22d2 100644 --- a/arch/arm/crypto/sha2-ce-core.S +++ b/arch/arm/lib/crypto/sha256-ce.S @@ -1,8 +1,8 @@ /* SPDX-License-Identifier: GPL-2.0-only */ /* - * sha2-ce-core.S - SHA-224/256 secure hash using ARMv8 Crypto Extensions + * sha256-ce.S - SHA-224/256 secure hash using ARMv8 Crypto Extensions * * Copyright (C) 2015 Linaro Ltd. * Author: Ard Biesheuvel */ @@ -65,14 +65,14 @@ .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 /* - * void sha2_ce_transform(struct sha256_state *sst, u8 const *src, - int blocks); + * void sha256_ce_transform(u32 state[SHA256_STATE_WORDS], + * const u8 *data, size_t nblocks); */ -ENTRY(sha2_ce_transform) +ENTRY(sha256_ce_transform) /* load state */ vld1.32 {dga-dgb}, [r0] /* load input */ 0: vld1.32 {q0-q1}, [r1]!
@@ -118,6 +118,6 @@ ENTRY(sha2_ce_transform) bne 0b /* store new state */ vst1.32 {dga-dgb}, [r0] bx lr -ENDPROC(sha2_ce_transform) +ENDPROC(sha256_ce_transform) diff --git a/arch/arm/lib/crypto/sha256.c b/arch/arm/lib/crypto/sha256.c new file mode 100644 index 0000000000000..3a8dfc304807a --- /dev/null +++ b/arch/arm/lib/crypto/sha256.c @@ -0,0 +1,64 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * SHA-256 optimized for ARM + * + * Copyright 2025 Google LLC + */ +#include +#include +#include +#include +#include + +asmlinkage void sha256_block_data_order(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks); +asmlinkage void sha256_block_data_order_neon(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks); +asmlinkage void sha256_ce_transform(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks); + +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce); + +void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks) +{ + if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && + static_branch_likely(&have_neon) && crypto_simd_usable()) { + kernel_neon_begin(); + if (static_branch_likely(&have_ce)) + sha256_ce_transform(state, data, nblocks); + else + sha256_block_data_order_neon(state, data, nblocks); + kernel_neon_end(); + } else { + sha256_block_data_order(state, data, nblocks); + } +} +EXPORT_SYMBOL(sha256_blocks_arch); + +bool sha256_is_arch_optimized(void) +{ + /* We can always use at least the ARM scalar implementation. */ + return true; +} +EXPORT_SYMBOL(sha256_is_arch_optimized); + +static int __init sha256_arm_mod_init(void) +{ + if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && (elf_hwcap & HWCAP_NEON)) { + static_branch_enable(&have_neon); + if (elf_hwcap2 & HWCAP2_SHA2) + static_branch_enable(&have_ce); + } + return 0; +} +arch_initcall(sha256_arm_mod_init); + +static void __exit sha256_arm_mod_exit(void) +{ +} +module_exit(sha256_arm_mod_exit); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("SHA-256 optimized for ARM"); -- 2.49.0 From nobody Sat Feb 7 05:57:37 2026
From: Eric Biggers
Subject: [PATCH 03/13] crypto: arm64/sha256 - remove obsolete chunking logic
Date: Fri, 25 Apr 2025 23:50:29 -0700
Message-ID: <20250426065041.1551914-4-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>

Since kernel-mode NEON sections are now preemptible on arm64, there is no longer any need to limit their length.
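For context, the simplification above is safe because the removed loop never changed the result — it only bounded how many blocks were fed to the transform per call, so a preemption point could occur between calls. A toy model of that equivalence (not kernel code; `toy_transform` is an invented stand-in for the real NEON block function):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 64 /* stand-in for SHA256_BLOCK_SIZE */

/* Toy stand-in for a block transform: mixes each 64-byte block
 * into an 8-word state. Only the call pattern matters here. */
static void toy_transform(uint32_t state[8], const uint8_t *data,
                          size_t nblocks)
{
	while (nblocks--) {
		for (size_t i = 0; i < BLOCK_SIZE; i++)
			state[i % 8] = (state[i % 8] * 31) + data[i];
		data += BLOCK_SIZE;
	}
}

/* One call over the whole input: the simplified code path. */
static void process_whole(uint32_t state[8], const uint8_t *data,
                          size_t nblocks)
{
	toy_transform(state, data, nblocks);
}

/* One block per call, as the removed CONFIG_PREEMPTION chunking loop
 * did, leaving room for a preemption point between the calls. */
static void process_chunked(uint32_t state[8], const uint8_t *data,
                            size_t nblocks)
{
	while (nblocks--) {
		toy_transform(state, data, 1);
		data += BLOCK_SIZE;
	}
}
```

Both paths perform the identical sequence of per-block updates, so the final state is the same; chunking was purely a scheduling-latency measure, which preemptible NEON sections make unnecessary.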
Signed-off-by: Eric Biggers Reviewed-by: Ard Biesheuvel --- arch/arm64/crypto/sha256-glue.c | 19 ++----------------- 1 file changed, 2 insertions(+), 17 deletions(-) diff --git a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glue.c index 26f9fdfae87bf..d63ea82e1374e 100644 --- a/arch/arm64/crypto/sha256-glue.c +++ b/arch/arm64/crypto/sha256-glue.c @@ -84,27 +84,12 @@ static struct shash_alg algs[] = { { } }; static int sha256_update_neon(struct shash_desc *desc, const u8 *data, unsigned int len) { - do { - unsigned int chunk = len; - - /* - * Don't hog the CPU for the entire time it takes to process all - * input when running on a preemptible kernel, but process the - * data block by block instead. - */ - if (IS_ENABLED(CONFIG_PREEMPTION)) - chunk = SHA256_BLOCK_SIZE; - - chunk -= sha256_base_do_update_blocks(desc, data, chunk, - sha256_neon_transform); - data += chunk; - len -= chunk; - } while (len >= SHA256_BLOCK_SIZE); - return len; + return sha256_base_do_update_blocks(desc, data, len, + sha256_neon_transform); } static int sha256_finup_neon(struct shash_desc *desc, const u8 *data, unsigned int len, u8 *out) { -- 2.49.0 From nobody Sat Feb 7 05:57:37 2026
From: Eric Biggers
Subject: [PATCH 04/13] crypto: arm64/sha256 - implement library instead of shash
Date: Fri, 25 Apr 2025 23:50:30 -0700
Message-ID: <20250426065041.1551914-5-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>

Instead of providing crypto_shash algorithms for the arch-optimized SHA-256 code, implement the SHA-256 library interface instead. This is much simpler, it makes the SHA-256 library functions arch-optimized, and it fixes the longstanding issue where the arch-optimized SHA-256 was disabled by default. SHA-256 remains available through crypto_shash, but individual architectures no longer need to handle it. Remove support for SHA-256 finalization from the ARMv8 CE assembly code, since the library does not yet support architecture-specific overrides of the finalization. (Support for that has been omitted for now, for simplicity and because it usually isn't performance-critical.) To match sha256_blocks_arch(), change the type of the nblocks parameter of the assembly functions from int or 'unsigned int' to size_t. Update the ARMv8 CE assembly function accordingly. The scalar and NEON assembly functions actually already treated it as size_t. While renaming the assembly files, also fix the naming quirks where "sha2" meant sha256, and "sha512" meant both sha256 and sha512.
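For background, the `(state, data, nblocks)` block interface that the series standardizes on can be sketched in portable C as below. This is an illustrative generic SHA-256 compression loop following FIPS 180-4, not the kernel's implementation; the name `sha256_blocks_generic` is invented for the example.

```c
#include <stddef.h>
#include <stdint.h>

#define SHA256_STATE_WORDS 8
#define SHA256_BLOCK_SIZE 64

/* SHA-256 round constants (FIPS 180-4 section 4.2.2). */
static const uint32_t K[64] = {
	0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5,
	0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174,
	0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da,
	0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967,
	0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85,
	0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070,
	0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3,
	0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2,
};

static uint32_t rotr(uint32_t x, int n) { return (x >> n) | (x << (32 - n)); }

/* Process nblocks whole 64-byte blocks; padding is the caller's job. */
void sha256_blocks_generic(uint32_t state[SHA256_STATE_WORDS],
			   const uint8_t *data, size_t nblocks)
{
	while (nblocks--) {
		uint32_t w[64], a, b, c, d, e, f, g, h;
		int i;

		/* Message schedule: 16 big-endian words, expanded to 64. */
		for (i = 0; i < 16; i++)
			w[i] = (uint32_t)data[4 * i] << 24 |
			       (uint32_t)data[4 * i + 1] << 16 |
			       (uint32_t)data[4 * i + 2] << 8 |
			       data[4 * i + 3];
		for (i = 16; i < 64; i++) {
			uint32_t s0 = rotr(w[i - 15], 7) ^ rotr(w[i - 15], 18) ^ (w[i - 15] >> 3);
			uint32_t s1 = rotr(w[i - 2], 17) ^ rotr(w[i - 2], 19) ^ (w[i - 2] >> 10);
			w[i] = w[i - 16] + s0 + w[i - 7] + s1;
		}

		a = state[0]; b = state[1]; c = state[2]; d = state[3];
		e = state[4]; f = state[5]; g = state[6]; h = state[7];

		/* 64 compression rounds. */
		for (i = 0; i < 64; i++) {
			uint32_t S1 = rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25);
			uint32_t ch = (e & f) ^ (~e & g);
			uint32_t t1 = h + S1 + ch + K[i] + w[i];
			uint32_t S0 = rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22);
			uint32_t maj = (a & b) ^ (a & c) ^ (b & c);
			uint32_t t2 = S0 + maj;

			h = g; g = f; f = e; e = d + t1;
			d = c; c = b; b = a; a = t1 + t2;
		}

		state[0] += a; state[1] += b; state[2] += c; state[3] += d;
		state[4] += e; state[5] += f; state[6] += g; state[7] += h;
		data += SHA256_BLOCK_SIZE;
	}
}
```

With this shape, an arch override like arm64's only has to swap the body of the loop for an optimized transform; buffering and padding stay in the common library code.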
Signed-off-by: Eric Biggers Reviewed-by: Ard Biesheuvel --- arch/arm64/configs/defconfig | 1 - arch/arm64/crypto/Kconfig | 19 --- arch/arm64/crypto/Makefile | 13 +- arch/arm64/crypto/sha2-ce-glue.c | 138 ---------------- arch/arm64/crypto/sha256-glue.c | 156 ------------------ arch/arm64/lib/crypto/.gitignore | 1 + arch/arm64/lib/crypto/Kconfig | 5 + arch/arm64/lib/crypto/Makefile | 9 +- .../crypto/sha2-armv8.pl} | 0 .../sha2-ce-core.S =3D> lib/crypto/sha256-ce.S} | 36 +--- arch/arm64/lib/crypto/sha256.c | 75 +++++++++ 11 files changed, 98 insertions(+), 355 deletions(-) delete mode 100644 arch/arm64/crypto/sha2-ce-glue.c delete mode 100644 arch/arm64/crypto/sha256-glue.c rename arch/arm64/{crypto/sha512-armv8.pl =3D> lib/crypto/sha2-armv8.pl} (= 100%) rename arch/arm64/{crypto/sha2-ce-core.S =3D> lib/crypto/sha256-ce.S} (80%) create mode 100644 arch/arm64/lib/crypto/sha256.c diff --git a/arch/arm64/configs/defconfig b/arch/arm64/configs/defconfig index 5bb8f09422a22..b0d4c7d173ea7 100644 --- a/arch/arm64/configs/defconfig +++ b/arch/arm64/configs/defconfig @@ -1735,11 +1735,10 @@ CONFIG_CRYPTO_MICHAEL_MIC=3Dm CONFIG_CRYPTO_ANSI_CPRNG=3Dy CONFIG_CRYPTO_USER_API_RNG=3Dm CONFIG_CRYPTO_CHACHA20_NEON=3Dm CONFIG_CRYPTO_GHASH_ARM64_CE=3Dy CONFIG_CRYPTO_SHA1_ARM64_CE=3Dy -CONFIG_CRYPTO_SHA2_ARM64_CE=3Dy CONFIG_CRYPTO_SHA512_ARM64_CE=3Dm CONFIG_CRYPTO_SHA3_ARM64=3Dm CONFIG_CRYPTO_SM3_ARM64_CE=3Dm CONFIG_CRYPTO_AES_ARM64_CE_BLK=3Dy CONFIG_CRYPTO_AES_ARM64_BS=3Dm diff --git a/arch/arm64/crypto/Kconfig b/arch/arm64/crypto/Kconfig index 55a7d87a67690..c44b0f202a1f5 100644 --- a/arch/arm64/crypto/Kconfig +++ b/arch/arm64/crypto/Kconfig @@ -34,29 +34,10 @@ config CRYPTO_SHA1_ARM64_CE SHA-1 secure hash algorithm (FIPS 180) =20 Architecture: arm64 using: - ARMv8 Crypto Extensions =20 -config CRYPTO_SHA256_ARM64 - tristate "Hash functions: SHA-224 and SHA-256" - select CRYPTO_HASH - help - SHA-224 and SHA-256 secure hash algorithms (FIPS 180) - - Architecture: arm64 - -config 
CRYPTO_SHA2_ARM64_CE - tristate "Hash functions: SHA-224 and SHA-256 (ARMv8 Crypto Extensions)" - depends on KERNEL_MODE_NEON - select CRYPTO_HASH - select CRYPTO_SHA256_ARM64 - help - SHA-224 and SHA-256 secure hash algorithms (FIPS 180) - - Architecture: arm64 using: - - ARMv8 Crypto Extensions - config CRYPTO_SHA512_ARM64 tristate "Hash functions: SHA-384 and SHA-512" select CRYPTO_HASH help SHA-384 and SHA-512 secure hash algorithms (FIPS 180) diff --git a/arch/arm64/crypto/Makefile b/arch/arm64/crypto/Makefile index 089ae3ddde810..c231c980c5142 100644 --- a/arch/arm64/crypto/Makefile +++ b/arch/arm64/crypto/Makefile @@ -6,13 +6,10 @@ # =20 obj-$(CONFIG_CRYPTO_SHA1_ARM64_CE) +=3D sha1-ce.o sha1-ce-y :=3D sha1-ce-glue.o sha1-ce-core.o =20 -obj-$(CONFIG_CRYPTO_SHA2_ARM64_CE) +=3D sha2-ce.o -sha2-ce-y :=3D sha2-ce-glue.o sha2-ce-core.o - obj-$(CONFIG_CRYPTO_SHA512_ARM64_CE) +=3D sha512-ce.o sha512-ce-y :=3D sha512-ce-glue.o sha512-ce-core.o =20 obj-$(CONFIG_CRYPTO_SHA3_ARM64) +=3D sha3-ce.o sha3-ce-y :=3D sha3-ce-glue.o sha3-ce-core.o @@ -54,13 +51,10 @@ obj-$(CONFIG_CRYPTO_AES_ARM64_CE_BLK) +=3D aes-ce-blk.o aes-ce-blk-y :=3D aes-glue-ce.o aes-ce.o =20 obj-$(CONFIG_CRYPTO_AES_ARM64_NEON_BLK) +=3D aes-neon-blk.o aes-neon-blk-y :=3D aes-glue-neon.o aes-neon.o =20 -obj-$(CONFIG_CRYPTO_SHA256_ARM64) +=3D sha256-arm64.o -sha256-arm64-y :=3D sha256-glue.o sha256-core.o - obj-$(CONFIG_CRYPTO_SHA512_ARM64) +=3D sha512-arm64.o sha512-arm64-y :=3D sha512-glue.o sha512-core.o =20 obj-$(CONFIG_CRYPTO_NHPOLY1305_NEON) +=3D nhpoly1305-neon.o nhpoly1305-neon-y :=3D nh-neon-core.o nhpoly1305-neon-glue.o @@ -72,12 +66,9 @@ obj-$(CONFIG_CRYPTO_AES_ARM64_BS) +=3D aes-neon-bs.o aes-neon-bs-y :=3D aes-neonbs-core.o aes-neonbs-glue.o =20 quiet_cmd_perlasm =3D PERLASM $@ cmd_perlasm =3D $(PERL) $(<) void $(@) =20 -$(obj)/%-core.S: $(src)/%-armv8.pl - $(call cmd,perlasm) - -$(obj)/sha256-core.S: $(src)/sha512-armv8.pl +$(obj)/sha512-core.S: $(src)/../lib/crypto/sha2-armv8.pl $(call 
cmd,perlasm) =20 -clean-files +=3D sha256-core.S sha512-core.S +clean-files +=3D sha512-core.S diff --git a/arch/arm64/crypto/sha2-ce-glue.c b/arch/arm64/crypto/sha2-ce-g= lue.c deleted file mode 100644 index 912c215101eb1..0000000000000 --- a/arch/arm64/crypto/sha2-ce-glue.c +++ /dev/null @@ -1,138 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-only -/* - * sha2-ce-glue.c - SHA-224/SHA-256 using ARMv8 Crypto Extensions - * - * Copyright (C) 2014 - 2017 Linaro Ltd - */ - -#include -#include -#include -#include -#include -#include -#include -#include - -MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash using ARMv8 Crypto Extensi= ons"); -MODULE_AUTHOR("Ard Biesheuvel "); -MODULE_LICENSE("GPL v2"); -MODULE_ALIAS_CRYPTO("sha224"); -MODULE_ALIAS_CRYPTO("sha256"); - -struct sha256_ce_state { - struct crypto_sha256_state sst; - u32 finalize; -}; - -extern const u32 sha256_ce_offsetof_count; -extern const u32 sha256_ce_offsetof_finalize; - -asmlinkage int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const= *src, - int blocks); - -static void sha256_ce_transform(struct crypto_sha256_state *sst, u8 const = *src, - int blocks) -{ - while (blocks) { - int rem; - - kernel_neon_begin(); - rem =3D __sha256_ce_transform(container_of(sst, - struct sha256_ce_state, - sst), src, blocks); - kernel_neon_end(); - src +=3D (blocks - rem) * SHA256_BLOCK_SIZE; - blocks =3D rem; - } -} - -const u32 sha256_ce_offsetof_count =3D offsetof(struct sha256_ce_state, - sst.count); -const u32 sha256_ce_offsetof_finalize =3D offsetof(struct sha256_ce_state, - finalize); - -static int sha256_ce_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - struct sha256_ce_state *sctx =3D shash_desc_ctx(desc); - - sctx->finalize =3D 0; - return sha256_base_do_update_blocks(desc, data, len, - sha256_ce_transform); -} - -static int sha256_ce_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - struct sha256_ce_state *sctx =3D shash_desc_ctx(desc); - bool 
finalize =3D !(len % SHA256_BLOCK_SIZE) && len; - - /* - * Allow the asm code to perform the finalization if there is no - * partial data and the input is a round multiple of the block size. - */ - sctx->finalize =3D finalize; - - if (finalize) - sha256_base_do_update_blocks(desc, data, len, - sha256_ce_transform); - else - sha256_base_do_finup(desc, data, len, sha256_ce_transform); - return sha256_base_finish(desc, out); -} - -static int sha256_ce_digest(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - sha256_base_init(desc); - return sha256_ce_finup(desc, data, len, out); -} - -static struct shash_alg algs[] =3D { { - .init =3D sha224_base_init, - .update =3D sha256_ce_update, - .finup =3D sha256_ce_finup, - .descsize =3D sizeof(struct sha256_ce_state), - .statesize =3D sizeof(struct crypto_sha256_state), - .digestsize =3D SHA224_DIGEST_SIZE, - .base =3D { - .cra_name =3D "sha224", - .cra_driver_name =3D "sha224-ce", - .cra_priority =3D 200, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA256_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -}, { - .init =3D sha256_base_init, - .update =3D sha256_ce_update, - .finup =3D sha256_ce_finup, - .digest =3D sha256_ce_digest, - .descsize =3D sizeof(struct sha256_ce_state), - .statesize =3D sizeof(struct crypto_sha256_state), - .digestsize =3D SHA256_DIGEST_SIZE, - .base =3D { - .cra_name =3D "sha256", - .cra_driver_name =3D "sha256-ce", - .cra_priority =3D 200, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA256_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -} }; - -static int __init sha2_ce_mod_init(void) -{ - return crypto_register_shashes(algs, ARRAY_SIZE(algs)); -} - -static void __exit sha2_ce_mod_fini(void) -{ - crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); -} - -module_cpu_feature_match(SHA2, sha2_ce_mod_init); -module_exit(sha2_ce_mod_fini); diff --git 
a/arch/arm64/crypto/sha256-glue.c b/arch/arm64/crypto/sha256-glu= e.c deleted file mode 100644 index d63ea82e1374e..0000000000000 --- a/arch/arm64/crypto/sha256-glue.c +++ /dev/null @@ -1,156 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Linux/arm64 port of the OpenSSL SHA256 implementation for AArch64 - * - * Copyright (c) 2016 Linaro Ltd. - */ - -#include -#include -#include -#include -#include -#include -#include - -MODULE_DESCRIPTION("SHA-224/SHA-256 secure hash for arm64"); -MODULE_AUTHOR("Andy Polyakov "); -MODULE_AUTHOR("Ard Biesheuvel "); -MODULE_LICENSE("GPL v2"); -MODULE_ALIAS_CRYPTO("sha224"); -MODULE_ALIAS_CRYPTO("sha256"); - -asmlinkage void sha256_block_data_order(u32 *digest, const void *data, - unsigned int num_blks); -EXPORT_SYMBOL(sha256_block_data_order); - -static void sha256_arm64_transform(struct crypto_sha256_state *sst, - u8 const *src, int blocks) -{ - sha256_block_data_order(sst->state, src, blocks); -} - -asmlinkage void sha256_block_neon(u32 *digest, const void *data, - unsigned int num_blks); - -static void sha256_neon_transform(struct crypto_sha256_state *sst, - u8 const *src, int blocks) -{ - kernel_neon_begin(); - sha256_block_neon(sst->state, src, blocks); - kernel_neon_end(); -} - -static int crypto_sha256_arm64_update(struct shash_desc *desc, const u8 *d= ata, - unsigned int len) -{ - return sha256_base_do_update_blocks(desc, data, len, - sha256_arm64_transform); -} - -static int crypto_sha256_arm64_finup(struct shash_desc *desc, const u8 *da= ta, - unsigned int len, u8 *out) -{ - sha256_base_do_finup(desc, data, len, sha256_arm64_transform); - return sha256_base_finish(desc, out); -} - -static struct shash_alg algs[] =3D { { - .digestsize =3D SHA256_DIGEST_SIZE, - .init =3D sha256_base_init, - .update =3D crypto_sha256_arm64_update, - .finup =3D crypto_sha256_arm64_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base.cra_name =3D "sha256", - .base.cra_driver_name =3D "sha256-arm64", - 
.base.cra_priority =3D 125, - .base.cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize =3D SHA256_BLOCK_SIZE, - .base.cra_module =3D THIS_MODULE, -}, { - .digestsize =3D SHA224_DIGEST_SIZE, - .init =3D sha224_base_init, - .update =3D crypto_sha256_arm64_update, - .finup =3D crypto_sha256_arm64_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base.cra_name =3D "sha224", - .base.cra_driver_name =3D "sha224-arm64", - .base.cra_priority =3D 125, - .base.cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize =3D SHA224_BLOCK_SIZE, - .base.cra_module =3D THIS_MODULE, -} }; - -static int sha256_update_neon(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha256_base_do_update_blocks(desc, data, len, - sha256_neon_transform); -} - -static int sha256_finup_neon(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - if (len >=3D SHA256_BLOCK_SIZE) { - int remain =3D sha256_update_neon(desc, data, len); - - data +=3D len - remain; - len =3D remain; - } - sha256_base_do_finup(desc, data, len, sha256_neon_transform); - return sha256_base_finish(desc, out); -} - -static struct shash_alg neon_algs[] =3D { { - .digestsize =3D SHA256_DIGEST_SIZE, - .init =3D sha256_base_init, - .update =3D sha256_update_neon, - .finup =3D sha256_finup_neon, - .descsize =3D sizeof(struct crypto_sha256_state), - .base.cra_name =3D "sha256", - .base.cra_driver_name =3D "sha256-arm64-neon", - .base.cra_priority =3D 150, - .base.cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize =3D SHA256_BLOCK_SIZE, - .base.cra_module =3D THIS_MODULE, -}, { - .digestsize =3D SHA224_DIGEST_SIZE, - .init =3D sha224_base_init, - .update =3D sha256_update_neon, - .finup =3D sha256_finup_neon, - .descsize =3D sizeof(struct crypto_sha256_state), - .base.cra_name =3D "sha224", - .base.cra_driver_name =3D "sha224-arm64-neon", - 
.base.cra_priority =3D 150, - .base.cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize =3D SHA224_BLOCK_SIZE, - .base.cra_module =3D THIS_MODULE, -} }; - -static int __init sha256_mod_init(void) -{ - int ret =3D crypto_register_shashes(algs, ARRAY_SIZE(algs)); - if (ret) - return ret; - - if (cpu_have_named_feature(ASIMD)) { - ret =3D crypto_register_shashes(neon_algs, ARRAY_SIZE(neon_algs)); - if (ret) - crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); - } - return ret; -} - -static void __exit sha256_mod_fini(void) -{ - if (cpu_have_named_feature(ASIMD)) - crypto_unregister_shashes(neon_algs, ARRAY_SIZE(neon_algs)); - crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); -} - -module_init(sha256_mod_init); -module_exit(sha256_mod_fini); diff --git a/arch/arm64/lib/crypto/.gitignore b/arch/arm64/lib/crypto/.giti= gnore index 0d47d4f21c6de..12d74d8b03d0a 100644 --- a/arch/arm64/lib/crypto/.gitignore +++ b/arch/arm64/lib/crypto/.gitignore @@ -1,2 +1,3 @@ # SPDX-License-Identifier: GPL-2.0-only poly1305-core.S +sha256-core.S diff --git a/arch/arm64/lib/crypto/Kconfig b/arch/arm64/lib/crypto/Kconfig index 0b903ef524d85..49e57bfdb5b52 100644 --- a/arch/arm64/lib/crypto/Kconfig +++ b/arch/arm64/lib/crypto/Kconfig @@ -10,5 +10,10 @@ config CRYPTO_CHACHA20_NEON config CRYPTO_POLY1305_NEON tristate depends on KERNEL_MODE_NEON default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 + +config CRYPTO_SHA256_ARM64 + tristate + default CRYPTO_LIB_SHA256 + select CRYPTO_ARCH_HAVE_LIB_SHA256 diff --git a/arch/arm64/lib/crypto/Makefile b/arch/arm64/lib/crypto/Makefile index ac624c3effdaf..141efe07155c0 100644 --- a/arch/arm64/lib/crypto/Makefile +++ b/arch/arm64/lib/crypto/Makefile @@ -5,12 +5,19 @@ chacha-neon-y :=3D chacha-neon-core.o chacha-neon-glue.o =20 obj-$(CONFIG_CRYPTO_POLY1305_NEON) +=3D poly1305-neon.o poly1305-neon-y :=3D poly1305-core.o poly1305-glue.o AFLAGS_poly1305-core.o +=3D 
-Dpoly1305_init=3Dpoly1305_init_arm64 =20 +obj-$(CONFIG_CRYPTO_SHA256_ARM64) +=3D sha256-arm64.o +sha256-arm64-y :=3D sha256.o sha256-core.o +sha256-arm64-$(CONFIG_KERNEL_MODE_NEON) +=3D sha256-ce.o + quiet_cmd_perlasm =3D PERLASM $@ cmd_perlasm =3D $(PERL) $(<) void $(@) =20 $(obj)/%-core.S: $(src)/%-armv8.pl $(call cmd,perlasm) =20 -clean-files +=3D poly1305-core.S +$(obj)/sha256-core.S: $(src)/sha2-armv8.pl + $(call cmd,perlasm) + +clean-files +=3D poly1305-core.S sha256-core.S diff --git a/arch/arm64/crypto/sha512-armv8.pl b/arch/arm64/lib/crypto/sha2= -armv8.pl similarity index 100% rename from arch/arm64/crypto/sha512-armv8.pl rename to arch/arm64/lib/crypto/sha2-armv8.pl diff --git a/arch/arm64/crypto/sha2-ce-core.S b/arch/arm64/lib/crypto/sha25= 6-ce.S similarity index 80% rename from arch/arm64/crypto/sha2-ce-core.S rename to arch/arm64/lib/crypto/sha256-ce.S index fce84d88ddb2c..a8461d6dad634 100644 --- a/arch/arm64/crypto/sha2-ce-core.S +++ b/arch/arm64/lib/crypto/sha256-ce.S @@ -69,12 +69,12 @@ .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 =20 /* - * int __sha256_ce_transform(struct sha256_ce_state *sst, u8 const *src, - * int blocks) + * size_t __sha256_ce_transform(u32 state[SHA256_STATE_WORDS], + * const u8 *data, size_t nblocks); */ .text SYM_FUNC_START(__sha256_ce_transform) /* load round constants */ adr_l x8, .Lsha2_rcon @@ -84,24 +84,20 @@ SYM_FUNC_START(__sha256_ce_transform) ld1 {v12.4s-v15.4s}, [x8] =20 /* load state */ ld1 {dgav.4s, dgbv.4s}, [x0] =20 - /* load sha256_ce_state::finalize */ - ldr_l w4, sha256_ce_offsetof_finalize, x4 - ldr w4, [x0, x4] - /* load input */ 0: ld1 {v16.4s-v19.4s}, [x1], #64 - sub w2, w2, #1 + sub x2, x2, #1 =20 CPU_LE( rev32 v16.16b, v16.16b ) CPU_LE( rev32 v17.16b, v17.16b ) CPU_LE( rev32 v18.16b, v18.16b ) CPU_LE( rev32 v19.16b, v19.16b ) =20 -1: add t0.4s, v16.4s, v0.4s + add t0.4s, v16.4s, v0.4s 
mov dg0v.16b, dgav.16b mov dg1v.16b, dgbv.16b =20 add_update 0, v1, 16, 17, 18, 19 add_update 1, v2, 17, 18, 19, 16 @@ -126,32 +122,14 @@ CPU_LE( rev32 v19.16b, v19.16b ) /* update state */ add dgav.4s, dgav.4s, dg0v.4s add dgbv.4s, dgbv.4s, dg1v.4s =20 /* handled all input blocks? */ - cbz w2, 2f + cbz x2, 1f cond_yield 3f, x5, x6 b 0b =20 - /* - * Final block: add padding and total bit count. - * Skip if the input size was not a round multiple of the block size, - * the padding is handled by the C code in that case. - */ -2: cbz x4, 3f - ldr_l w4, sha256_ce_offsetof_count, x4 - ldr x4, [x0, x4] - movi v17.2d, #0 - mov x8, #0x80000000 - movi v18.2d, #0 - ror x7, x4, #29 // ror(lsl(x4, 3), 32) - fmov d16, x8 - mov x4, #0 - mov v19.d[0], xzr - mov v19.d[1], x7 - b 1b - /* store new state */ -3: st1 {dgav.4s, dgbv.4s}, [x0] - mov w0, w2 +1: st1 {dgav.4s, dgbv.4s}, [x0] + mov x0, x2 ret SYM_FUNC_END(__sha256_ce_transform) diff --git a/arch/arm64/lib/crypto/sha256.c b/arch/arm64/lib/crypto/sha256.c new file mode 100644 index 0000000000000..2bd413c586d27 --- /dev/null +++ b/arch/arm64/lib/crypto/sha256.c @@ -0,0 +1,75 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * SHA-256 optimized for ARM64 + * + * Copyright 2025 Google LLC + */ +#include +#include +#include +#include +#include + +asmlinkage void sha256_block_data_order(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks); +asmlinkage void sha256_block_neon(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks); +asmlinkage size_t __sha256_ce_transform(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks); + +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce); + +void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks) +{ + if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && + static_branch_likely(&have_neon) && crypto_simd_usable()) { + if (static_branch_likely(&have_ce)) { + do { + size_t 
rem;
+
+				kernel_neon_begin();
+				rem = __sha256_ce_transform(state,
+							    data, nblocks);
+				kernel_neon_end();
+				data += (nblocks - rem) * SHA256_BLOCK_SIZE;
+				nblocks = rem;
+			} while (nblocks);
+		} else {
+			kernel_neon_begin();
+			sha256_block_neon(state, data, nblocks);
+			kernel_neon_end();
+		}
+	} else {
+		sha256_block_data_order(state, data, nblocks);
+	}
+}
+EXPORT_SYMBOL(sha256_blocks_arch);
+
+bool sha256_is_arch_optimized(void)
+{
+	/* We always can use at least the ARM64 scalar implementation. */
+	return true;
+}
+EXPORT_SYMBOL(sha256_is_arch_optimized);
+
+static int __init sha256_arm64_mod_init(void)
+{
+	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
+	    cpu_have_named_feature(ASIMD)) {
+		static_branch_enable(&have_neon);
+		if (cpu_have_named_feature(SHA2))
+			static_branch_enable(&have_ce);
+	}
+	return 0;
+}
+arch_initcall(sha256_arm64_mod_init);
+
+static void __exit sha256_arm64_mod_exit(void)
+{
+}
+module_exit(sha256_arm64_mod_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("SHA-256 optimized for ARM64");
-- 
2.49.0
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, sparclinux@vger.kernel.org, linux-s390@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , "Jason A .
Donenfeld " , Linus Torvalds
Subject: [PATCH 05/13] crypto: mips/sha256 - implement library instead of shash
Date: Fri, 25 Apr 2025 23:50:31 -0700
Message-ID: <20250426065041.1551914-6-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>
References: <20250426065041.1551914-1-ebiggers@kernel.org>

From: Eric Biggers

Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library instead. This is much
simpler, it makes the SHA-256 library functions arch-optimized, and it
fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default. SHA-256 remains available through crypto_shash,
but individual architectures no longer need to handle it.

Signed-off-by: Eric Biggers
---
 arch/mips/cavium-octeon/Kconfig               |   6 +
 .../mips/cavium-octeon/crypto/octeon-sha256.c | 135 ++++--------------
 arch/mips/configs/cavium_octeon_defconfig     |   1 -
 arch/mips/crypto/Kconfig                      |  10 --
 4 files changed, 33 insertions(+), 119 deletions(-)

diff --git a/arch/mips/cavium-octeon/Kconfig b/arch/mips/cavium-octeon/Kconfig
index 450e979ef5d93..11f4aa6e80e9b 100644
--- a/arch/mips/cavium-octeon/Kconfig
+++ b/arch/mips/cavium-octeon/Kconfig
@@ -21,10 +21,16 @@ config CAVIUM_OCTEON_CVMSEG_SIZE
	  local memory; the larger CVMSEG is, the smaller the cache is.
	  This selects the size of CVMSEG LM, which is in cache blocks.
	  The legally range is from zero to 54 cache blocks (i.e. CVMSEG LM
	  is between zero and 6192 bytes).
=20 +config CRYPTO_SHA256_OCTEON + tristate + default CRYPTO_LIB_SHA256 + select CRYPTO_ARCH_HAVE_LIB_SHA256 + select CRYPTO_LIB_SHA256_GENERIC + endif # CPU_CAVIUM_OCTEON =20 if CAVIUM_OCTEON_SOC =20 config CAVIUM_OCTEON_LOCK_L2 diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha256.c b/arch/mips/cav= ium-octeon/crypto/octeon-sha256.c index 8e85ea65387c8..f169054852bcb 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-sha256.c +++ b/arch/mips/cavium-octeon/crypto/octeon-sha256.c @@ -1,10 +1,8 @@ // SPDX-License-Identifier: GPL-2.0-or-later /* - * Cryptographic API. - * - * SHA-224 and SHA-256 Secure Hash Algorithm. + * SHA-256 Secure Hash Algorithm. * * Adapted for OCTEON by Aaro Koskinen . * * Based on crypto/sha256_generic.c, which is: * @@ -13,142 +11,63 @@ * Copyright (c) 2002 James Morris * SHA224 Support Copyright 2007 Intel Corporation */ =20 #include -#include -#include -#include +#include #include #include =20 #include "octeon-crypto.h" =20 /* * We pass everything as 64-bit. OCTEON can handle misaligned data. 
*/ =20 -static void octeon_sha256_store_hash(struct crypto_sha256_state *sctx) +void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks) { - u64 *hash =3D (u64 *)sctx->state; - - write_octeon_64bit_hash_dword(hash[0], 0); - write_octeon_64bit_hash_dword(hash[1], 1); - write_octeon_64bit_hash_dword(hash[2], 2); - write_octeon_64bit_hash_dword(hash[3], 3); -} + struct octeon_cop2_state cop2_state; + u64 *state64 =3D (u64 *)state; + unsigned long flags; =20 -static void octeon_sha256_read_hash(struct crypto_sha256_state *sctx) -{ - u64 *hash =3D (u64 *)sctx->state; + if (!octeon_has_crypto()) + return sha256_blocks_generic(state, data, nblocks); =20 - hash[0] =3D read_octeon_64bit_hash_dword(0); - hash[1] =3D read_octeon_64bit_hash_dword(1); - hash[2] =3D read_octeon_64bit_hash_dword(2); - hash[3] =3D read_octeon_64bit_hash_dword(3); -} + flags =3D octeon_crypto_enable(&cop2_state); + write_octeon_64bit_hash_dword(state64[0], 0); + write_octeon_64bit_hash_dword(state64[1], 1); + write_octeon_64bit_hash_dword(state64[2], 2); + write_octeon_64bit_hash_dword(state64[3], 3); =20 -static void octeon_sha256_transform(struct crypto_sha256_state *sctx, - const u8 *src, int blocks) -{ do { - const u64 *block =3D (const u64 *)src; + const u64 *block =3D (const u64 *)data; =20 write_octeon_64bit_block_dword(block[0], 0); write_octeon_64bit_block_dword(block[1], 1); write_octeon_64bit_block_dword(block[2], 2); write_octeon_64bit_block_dword(block[3], 3); write_octeon_64bit_block_dword(block[4], 4); write_octeon_64bit_block_dword(block[5], 5); write_octeon_64bit_block_dword(block[6], 6); octeon_sha256_start(block[7]); =20 - src +=3D SHA256_BLOCK_SIZE; - } while (--blocks); -} - -static int octeon_sha256_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - struct crypto_sha256_state *sctx =3D shash_desc_ctx(desc); - struct octeon_cop2_state state; - unsigned long flags; - int remain; - - flags =3D octeon_crypto_enable(&state); - 
octeon_sha256_store_hash(sctx); - - remain =3D sha256_base_do_update_blocks(desc, data, len, - octeon_sha256_transform); + data +=3D SHA256_BLOCK_SIZE; + } while (--nblocks); =20 - octeon_sha256_read_hash(sctx); - octeon_crypto_disable(&state, flags); - return remain; + state64[0] =3D read_octeon_64bit_hash_dword(0); + state64[1] =3D read_octeon_64bit_hash_dword(1); + state64[2] =3D read_octeon_64bit_hash_dword(2); + state64[3] =3D read_octeon_64bit_hash_dword(3); + octeon_crypto_disable(&cop2_state, flags); } +EXPORT_SYMBOL(sha256_blocks_arch); =20 -static int octeon_sha256_finup(struct shash_desc *desc, const u8 *src, - unsigned int len, u8 *out) +bool sha256_is_arch_optimized(void) { - struct crypto_sha256_state *sctx =3D shash_desc_ctx(desc); - struct octeon_cop2_state state; - unsigned long flags; - - flags =3D octeon_crypto_enable(&state); - octeon_sha256_store_hash(sctx); - - sha256_base_do_finup(desc, src, len, octeon_sha256_transform); - - octeon_sha256_read_hash(sctx); - octeon_crypto_disable(&state, flags); - return sha256_base_finish(desc, out); + return octeon_has_crypto(); } - -static struct shash_alg octeon_sha256_algs[2] =3D { { - .digestsize =3D SHA256_DIGEST_SIZE, - .init =3D sha256_base_init, - .update =3D octeon_sha256_update, - .finup =3D octeon_sha256_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base =3D { - .cra_name =3D "sha256", - .cra_driver_name=3D "octeon-sha256", - .cra_priority =3D OCTEON_CR_OPCODE_PRIORITY, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY, - .cra_blocksize =3D SHA256_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -}, { - .digestsize =3D SHA224_DIGEST_SIZE, - .init =3D sha224_base_init, - .update =3D octeon_sha256_update, - .finup =3D octeon_sha256_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base =3D { - .cra_name =3D "sha224", - .cra_driver_name=3D "octeon-sha224", - .cra_priority =3D OCTEON_CR_OPCODE_PRIORITY, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY, - .cra_blocksize =3D 
SHA224_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -} }; - -static int __init octeon_sha256_mod_init(void) -{ - if (!octeon_has_crypto()) - return -ENOTSUPP; - return crypto_register_shashes(octeon_sha256_algs, - ARRAY_SIZE(octeon_sha256_algs)); -} - -static void __exit octeon_sha256_mod_fini(void) -{ - crypto_unregister_shashes(octeon_sha256_algs, - ARRAY_SIZE(octeon_sha256_algs)); -} - -module_init(octeon_sha256_mod_init); -module_exit(octeon_sha256_mod_fini); +EXPORT_SYMBOL(sha256_is_arch_optimized); =20 MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithm (OCTEON)"); +MODULE_DESCRIPTION("SHA-256 Secure Hash Algorithm (OCTEON)"); MODULE_AUTHOR("Aaro Koskinen "); diff --git a/arch/mips/configs/cavium_octeon_defconfig b/arch/mips/configs/= cavium_octeon_defconfig index f523ee6f25bfe..88ae0aa85364b 100644 --- a/arch/mips/configs/cavium_octeon_defconfig +++ b/arch/mips/configs/cavium_octeon_defconfig @@ -155,11 +155,10 @@ CONFIG_SECURITY=3Dy CONFIG_SECURITY_NETWORK=3Dy CONFIG_CRYPTO_CBC=3Dy CONFIG_CRYPTO_HMAC=3Dy CONFIG_CRYPTO_MD5_OCTEON=3Dy CONFIG_CRYPTO_SHA1_OCTEON=3Dm -CONFIG_CRYPTO_SHA256_OCTEON=3Dm CONFIG_CRYPTO_SHA512_OCTEON=3Dm CONFIG_CRYPTO_DES=3Dy CONFIG_DEBUG_INFO_DWARF_TOOLCHAIN_DEFAULT=3Dy CONFIG_DEBUG_FS=3Dy CONFIG_MAGIC_SYSRQ=3Dy diff --git a/arch/mips/crypto/Kconfig b/arch/mips/crypto/Kconfig index 9db1fd6d9f0e0..6bf073ae7613f 100644 --- a/arch/mips/crypto/Kconfig +++ b/arch/mips/crypto/Kconfig @@ -20,20 +20,10 @@ config CRYPTO_SHA1_OCTEON help SHA-1 secure hash algorithm (FIPS 180) =20 Architecture: mips OCTEON =20 -config CRYPTO_SHA256_OCTEON - tristate "Hash functions: SHA-224 and SHA-256 (OCTEON)" - depends on CPU_CAVIUM_OCTEON - select CRYPTO_SHA256 - select CRYPTO_HASH - help - SHA-224 and SHA-256 secure hash algorithms (FIPS 180) - - Architecture: mips OCTEON using crypto instructions, when available - config CRYPTO_SHA512_OCTEON tristate "Hash functions: SHA-384 and SHA-512 (OCTEON)" depends on 
CPU_CAVIUM_OCTEON
	select CRYPTO_SHA512
	select CRYPTO_HASH
-- 
2.49.0

From: Eric Biggers
Subject: [PATCH 06/13] crypto: powerpc/sha256 - implement library instead of shash
Date: Fri, 25 Apr 2025 23:50:32 -0700
Message-ID: <20250426065041.1551914-7-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>
References: <20250426065041.1551914-1-ebiggers@kernel.org>

From: Eric Biggers

Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library instead. This is much
simpler, it makes the SHA-256 library functions arch-optimized, and it
fixes the longstanding issue where the arch-optimized SHA-256 was
disabled by default. SHA-256 remains available through crypto_shash,
but individual architectures no longer need to handle it.
Signed-off-by: Eric Biggers --- arch/powerpc/crypto/Kconfig | 11 -- arch/powerpc/crypto/Makefile | 2 - arch/powerpc/crypto/sha256-spe-glue.c | 128 ------------------ arch/powerpc/lib/crypto/Kconfig | 6 + arch/powerpc/lib/crypto/Makefile | 3 + .../powerpc/{ =3D> lib}/crypto/sha256-spe-asm.S | 0 arch/powerpc/lib/crypto/sha256.c | 70 ++++++++++ 7 files changed, 79 insertions(+), 141 deletions(-) delete mode 100644 arch/powerpc/crypto/sha256-spe-glue.c rename arch/powerpc/{ =3D> lib}/crypto/sha256-spe-asm.S (100%) create mode 100644 arch/powerpc/lib/crypto/sha256.c diff --git a/arch/powerpc/crypto/Kconfig b/arch/powerpc/crypto/Kconfig index 4bf7b01228e72..caaa359f47420 100644 --- a/arch/powerpc/crypto/Kconfig +++ b/arch/powerpc/crypto/Kconfig @@ -37,21 +37,10 @@ config CRYPTO_SHA1_PPC_SPE SHA-1 secure hash algorithm (FIPS 180) =20 Architecture: powerpc using - SPE (Signal Processing Engine) extensions =20 -config CRYPTO_SHA256_PPC_SPE - tristate "Hash functions: SHA-224 and SHA-256 (SPE)" - depends on SPE - select CRYPTO_SHA256 - select CRYPTO_HASH - help - SHA-224 and SHA-256 secure hash algorithms (FIPS 180) - - Architecture: powerpc using - - SPE (Signal Processing Engine) extensions - config CRYPTO_AES_PPC_SPE tristate "Ciphers: AES, modes: ECB/CBC/CTR/XTS (SPE)" depends on SPE select CRYPTO_SKCIPHER help diff --git a/arch/powerpc/crypto/Makefile b/arch/powerpc/crypto/Makefile index f13aec8a18335..8c2936ae466fc 100644 --- a/arch/powerpc/crypto/Makefile +++ b/arch/powerpc/crypto/Makefile @@ -7,20 +7,18 @@ =20 obj-$(CONFIG_CRYPTO_AES_PPC_SPE) +=3D aes-ppc-spe.o obj-$(CONFIG_CRYPTO_MD5_PPC) +=3D md5-ppc.o obj-$(CONFIG_CRYPTO_SHA1_PPC) +=3D sha1-powerpc.o obj-$(CONFIG_CRYPTO_SHA1_PPC_SPE) +=3D sha1-ppc-spe.o -obj-$(CONFIG_CRYPTO_SHA256_PPC_SPE) +=3D sha256-ppc-spe.o obj-$(CONFIG_CRYPTO_AES_GCM_P10) +=3D aes-gcm-p10-crypto.o obj-$(CONFIG_CRYPTO_DEV_VMX_ENCRYPT) +=3D vmx-crypto.o obj-$(CONFIG_CRYPTO_CURVE25519_PPC64) +=3D curve25519-ppc64le.o =20 aes-ppc-spe-y :=3D 
aes-spe-core.o aes-spe-keys.o aes-tab-4k.o aes-spe-mode= s.o aes-spe-glue.o md5-ppc-y :=3D md5-asm.o md5-glue.o sha1-powerpc-y :=3D sha1-powerpc-asm.o sha1.o sha1-ppc-spe-y :=3D sha1-spe-asm.o sha1-spe-glue.o -sha256-ppc-spe-y :=3D sha256-spe-asm.o sha256-spe-glue.o aes-gcm-p10-crypto-y :=3D aes-gcm-p10-glue.o aes-gcm-p10.o ghashp10-ppc.o = aesp10-ppc.o vmx-crypto-objs :=3D vmx.o aesp8-ppc.o ghashp8-ppc.o aes.o aes_cbc.o aes_c= tr.o aes_xts.o ghash.o curve25519-ppc64le-y :=3D curve25519-ppc64le-core.o curve25519-ppc64le_asm= .o =20 ifeq ($(CONFIG_CPU_LITTLE_ENDIAN),y) diff --git a/arch/powerpc/crypto/sha256-spe-glue.c b/arch/powerpc/crypto/sh= a256-spe-glue.c deleted file mode 100644 index 42c76bf8062dc..0000000000000 --- a/arch/powerpc/crypto/sha256-spe-glue.c +++ /dev/null @@ -1,128 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * Glue code for SHA-256 implementation for SPE instructions (PPC) - * - * Based on generic implementation. The assembler module takes care=20 - * about the SPE registers so it can run from interrupt context. - * - * Copyright (c) 2015 Markus Stockhausen - */ - -#include -#include -#include -#include -#include -#include -#include - -/* - * MAX_BYTES defines the number of bytes that are allowed to be processed - * between preempt_disable() and preempt_enable(). SHA256 takes ~2,000 - * operations per 64 bytes. e500 cores can issue two arithmetic instructio= ns - * per clock cycle using one 32/64 bit unit (SU1) and one 32 bit unit (SU2= ). - * Thus 1KB of input data will need an estimated maximum of 18,000 cycles. - * Headroom for cache misses included. Even with the low end model clocked - * at 667 MHz this equals to a critical time window of less than 27us. - * - */ -#define MAX_BYTES 1024 - -extern void ppc_spe_sha256_transform(u32 *state, const u8 *src, u32 blocks= ); - -static void spe_begin(void) -{ - /* We just start SPE operations and will save SPE registers later. 
*/ - preempt_disable(); - enable_kernel_spe(); -} - -static void spe_end(void) -{ - disable_kernel_spe(); - /* reenable preemption */ - preempt_enable(); -} - -static void ppc_spe_sha256_block(struct crypto_sha256_state *sctx, - const u8 *src, int blocks) -{ - do { - /* cut input data into smaller blocks */ - int unit =3D min(blocks, MAX_BYTES / SHA256_BLOCK_SIZE); - - spe_begin(); - ppc_spe_sha256_transform(sctx->state, src, unit); - spe_end(); - - src +=3D unit * SHA256_BLOCK_SIZE; - blocks -=3D unit; - } while (blocks); -} - -static int ppc_spe_sha256_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return sha256_base_do_update_blocks(desc, data, len, - ppc_spe_sha256_block); -} - -static int ppc_spe_sha256_finup(struct shash_desc *desc, const u8 *src, - unsigned int len, u8 *out) -{ - sha256_base_do_finup(desc, src, len, ppc_spe_sha256_block); - return sha256_base_finish(desc, out); -} - -static struct shash_alg algs[2] =3D { { - .digestsize =3D SHA256_DIGEST_SIZE, - .init =3D sha256_base_init, - .update =3D ppc_spe_sha256_update, - .finup =3D ppc_spe_sha256_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base =3D { - .cra_name =3D "sha256", - .cra_driver_name=3D "sha256-ppc-spe", - .cra_priority =3D 300, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA256_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -}, { - .digestsize =3D SHA224_DIGEST_SIZE, - .init =3D sha224_base_init, - .update =3D ppc_spe_sha256_update, - .finup =3D ppc_spe_sha256_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base =3D { - .cra_name =3D "sha224", - .cra_driver_name=3D "sha224-ppc-spe", - .cra_priority =3D 300, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA224_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -} }; - -static int __init ppc_spe_sha256_mod_init(void) -{ - return crypto_register_shashes(algs, ARRAY_SIZE(algs)); -} - 
-static void __exit ppc_spe_sha256_mod_fini(void)
-{
-	crypto_unregister_shashes(algs, ARRAY_SIZE(algs));
-}
-
-module_init(ppc_spe_sha256_mod_init);
-module_exit(ppc_spe_sha256_mod_fini);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithm, SPE optimized");
-
-MODULE_ALIAS_CRYPTO("sha224");
-MODULE_ALIAS_CRYPTO("sha224-ppc-spe");
-MODULE_ALIAS_CRYPTO("sha256");
-MODULE_ALIAS_CRYPTO("sha256-ppc-spe");
diff --git a/arch/powerpc/lib/crypto/Kconfig b/arch/powerpc/lib/crypto/Kconfig
index bf6d0ab22c27d..ffa541ad6d5da 100644
--- a/arch/powerpc/lib/crypto/Kconfig
+++ b/arch/powerpc/lib/crypto/Kconfig
@@ -11,5 +11,11 @@ config CRYPTO_POLY1305_P10
 	tristate
 	depends on PPC64 && CPU_LITTLE_ENDIAN && VSX
 	default CRYPTO_LIB_POLY1305
 	select CRYPTO_ARCH_HAVE_LIB_POLY1305
 	select CRYPTO_LIB_POLY1305_GENERIC
+
+config CRYPTO_SHA256_PPC_SPE
+	tristate
+	depends on SPE
+	default CRYPTO_LIB_SHA256
+	select CRYPTO_ARCH_HAVE_LIB_SHA256
diff --git a/arch/powerpc/lib/crypto/Makefile b/arch/powerpc/lib/crypto/Makefile
index 5709ae14258a0..27f231f8e334a 100644
--- a/arch/powerpc/lib/crypto/Makefile
+++ b/arch/powerpc/lib/crypto/Makefile
@@ -3,5 +3,8 @@
 obj-$(CONFIG_CRYPTO_CHACHA20_P10) += chacha-p10-crypto.o
 chacha-p10-crypto-y := chacha-p10-glue.o chacha-p10le-8x.o
 
 obj-$(CONFIG_CRYPTO_POLY1305_P10) += poly1305-p10-crypto.o
 poly1305-p10-crypto-y := poly1305-p10-glue.o poly1305-p10le_64.o
+
+obj-$(CONFIG_CRYPTO_SHA256_PPC_SPE) += sha256-ppc-spe.o
+sha256-ppc-spe-y := sha256.o sha256-spe-asm.o
diff --git a/arch/powerpc/crypto/sha256-spe-asm.S b/arch/powerpc/lib/crypto/sha256-spe-asm.S
similarity index 100%
rename from arch/powerpc/crypto/sha256-spe-asm.S
rename to arch/powerpc/lib/crypto/sha256-spe-asm.S
diff --git a/arch/powerpc/lib/crypto/sha256.c b/arch/powerpc/lib/crypto/sha256.c
new file mode 100644
index 0000000000000..c05023c5acdd4
--- /dev/null
+++ b/arch/powerpc/lib/crypto/sha256.c
@@ -0,0 +1,70 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * SHA-256 Secure Hash Algorithm, SPE optimized
+ *
+ * Based on generic implementation. The assembler module takes care
+ * about the SPE registers so it can run from interrupt context.
+ *
+ * Copyright (c) 2015 Markus Stockhausen
+ */
+
+#include
+#include
+#include
+#include
+#include
+
+/*
+ * MAX_BYTES defines the number of bytes that are allowed to be processed
+ * between preempt_disable() and preempt_enable(). SHA256 takes ~2,000
+ * operations per 64 bytes. e500 cores can issue two arithmetic instructions
+ * per clock cycle using one 32/64 bit unit (SU1) and one 32 bit unit (SU2).
+ * Thus 1KB of input data will need an estimated maximum of 18,000 cycles.
+ * Headroom for cache misses included. Even with the low end model clocked
+ * at 667 MHz this equals to a critical time window of less than 27us.
+ *
+ */
+#define MAX_BYTES 1024
+
+extern void ppc_spe_sha256_transform(u32 *state, const u8 *src, u32 blocks);
+
+static void spe_begin(void)
+{
+	/* We just start SPE operations and will save SPE registers later. */
+	preempt_disable();
+	enable_kernel_spe();
+}
+
+static void spe_end(void)
+{
+	disable_kernel_spe();
+	/* reenable preemption */
+	preempt_enable();
+}
+
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
+			const u8 *data, size_t nblocks)
+{
+	do {
+		/* cut input data into smaller blocks */
+		u32 unit = min_t(size_t, nblocks,
+				 MAX_BYTES / SHA256_BLOCK_SIZE);
+
+		spe_begin();
+		ppc_spe_sha256_transform(state, data, unit);
+		spe_end();
+
+		data += unit * SHA256_BLOCK_SIZE;
+		nblocks -= unit;
+	} while (nblocks);
+}
+EXPORT_SYMBOL(sha256_blocks_arch);
+
+bool sha256_is_arch_optimized(void)
+{
+	return true;
+}
+EXPORT_SYMBOL(sha256_is_arch_optimized);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("SHA-256 Secure Hash Algorithm, SPE optimized");
-- 
2.49.0
From nobody Sat Feb 7 05:57:37 2026
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	sparclinux@vger.kernel.org, linux-s390@vger.kernel.org, x86@kernel.org,
	Ard Biesheuvel, "Jason A. Donenfeld", Linus Torvalds
Subject: [PATCH 07/13] crypto: riscv/sha256 - implement library instead of shash
Date: Fri, 25 Apr 2025 23:50:33 -0700
Message-ID: <20250426065041.1551914-8-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>
References: <20250426065041.1551914-1-ebiggers@kernel.org>

From: Eric Biggers

Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library.
This is much simpler, makes the SHA-256 library functions
arch-optimized, and fixes the longstanding issue where the
arch-optimized SHA-256 was disabled by default. SHA-256 still remains
available through crypto_shash, but individual architectures no longer
need to handle it.

To match sha256_blocks_arch(), change the type of the nblocks parameter
of the assembly function from int to size_t. The assembly function
actually already treated it as size_t.

Signed-off-by: Eric Biggers
---
 arch/riscv/crypto/Kconfig                          |  11 --
 arch/riscv/crypto/Makefile                         |   3 -
 arch/riscv/crypto/sha256-riscv64-glue.c            | 125 ------------------
 arch/riscv/lib/crypto/Kconfig                      |   7 +
 arch/riscv/lib/crypto/Makefile                     |   3 +
 .../sha256-riscv64-zvknha_or_zvknhb-zvkb.S         |   4 +-
 arch/riscv/lib/crypto/sha256.c                     |  62 +++++++++
 7 files changed, 74 insertions(+), 141 deletions(-)
 delete mode 100644 arch/riscv/crypto/sha256-riscv64-glue.c
 rename arch/riscv/{ => lib}/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.S (98%)
 create mode 100644 arch/riscv/lib/crypto/sha256.c

diff --git a/arch/riscv/crypto/Kconfig b/arch/riscv/crypto/Kconfig
index 4863be2a4ec2f..cd9b776602f89 100644
--- a/arch/riscv/crypto/Kconfig
+++ b/arch/riscv/crypto/Kconfig
@@ -26,21 +26,10 @@ config CRYPTO_GHASH_RISCV64
 	  GCM GHASH function (NIST SP 800-38D)
 
 	  Architecture: riscv64 using:
 	  - Zvkg vector crypto extension
 
-config CRYPTO_SHA256_RISCV64
-	tristate "Hash functions: SHA-224 and SHA-256"
-	depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
-	select CRYPTO_SHA256
-	help
-	  SHA-224 and SHA-256 secure hash algorithm (FIPS 180)
-
-	  Architecture: riscv64 using:
-	  - Zvknha or Zvknhb vector crypto extensions
-	  - Zvkb vector crypto extension
-
 config CRYPTO_SHA512_RISCV64
 	tristate "Hash functions: SHA-384 and SHA-512"
 	depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
 	select CRYPTO_SHA512
 	help
diff --git a/arch/riscv/crypto/Makefile b/arch/riscv/crypto/Makefile
index 4ae9bf762e907..e10e8257734e3 100644
--- a/arch/riscv/crypto/Makefile
+++ b/arch/riscv/crypto/Makefile
@@ -5,13 +5,10 @@ aes-riscv64-y := aes-riscv64-glue.o aes-riscv64-zvkned.o \
 		  aes-riscv64-zvkned-zvbb-zvkg.o aes-riscv64-zvkned-zvkb.o
 
 obj-$(CONFIG_CRYPTO_GHASH_RISCV64) += ghash-riscv64.o
 ghash-riscv64-y := ghash-riscv64-glue.o ghash-riscv64-zvkg.o
 
-obj-$(CONFIG_CRYPTO_SHA256_RISCV64) += sha256-riscv64.o
-sha256-riscv64-y := sha256-riscv64-glue.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o
-
 obj-$(CONFIG_CRYPTO_SHA512_RISCV64) += sha512-riscv64.o
 sha512-riscv64-y := sha512-riscv64-glue.o sha512-riscv64-zvknhb-zvkb.o
 
 obj-$(CONFIG_CRYPTO_SM3_RISCV64) += sm3-riscv64.o
 sm3-riscv64-y := sm3-riscv64-glue.o sm3-riscv64-zvksh-zvkb.o
diff --git a/arch/riscv/crypto/sha256-riscv64-glue.c b/arch/riscv/crypto/sha256-riscv64-glue.c
deleted file mode 100644
index c998300ab8435..0000000000000
--- a/arch/riscv/crypto/sha256-riscv64-glue.c
+++ /dev/null
@@ -1,125 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-or-later
-/*
- * SHA-256 and SHA-224 using the RISC-V vector crypto extensions
- *
- * Copyright (C) 2022 VRULL GmbH
- * Author: Heiko Stuebner
- *
- * Copyright (C) 2023 SiFive, Inc.
- * Author: Jerry Shih
- */
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-/*
- * Note: the asm function only uses the 'state' field of struct sha256_state.
- * It is assumed to be the first field.
- */
-asmlinkage void sha256_transform_zvknha_or_zvknhb_zvkb(
-	struct crypto_sha256_state *state, const u8 *data, int num_blocks);
-
-static void sha256_block(struct crypto_sha256_state *state, const u8 *data,
-			 int num_blocks)
-{
-	/*
-	 * Ensure struct crypto_sha256_state begins directly with the SHA-256
-	 * 256-bit internal state, as this is what the asm function expects.
-	 */
-	BUILD_BUG_ON(offsetof(struct crypto_sha256_state, state) != 0);
-
-	if (crypto_simd_usable()) {
-		kernel_vector_begin();
-		sha256_transform_zvknha_or_zvknhb_zvkb(state, data, num_blocks);
-		kernel_vector_end();
-	} else
-		sha256_transform_blocks(state, data, num_blocks);
-}
-
-static int riscv64_sha256_update(struct shash_desc *desc, const u8 *data,
-				 unsigned int len)
-{
-	return sha256_base_do_update_blocks(desc, data, len, sha256_block);
-}
-
-static int riscv64_sha256_finup(struct shash_desc *desc, const u8 *data,
-				unsigned int len, u8 *out)
-{
-	sha256_base_do_finup(desc, data, len, sha256_block);
-	return sha256_base_finish(desc, out);
-}
-
-static int riscv64_sha256_digest(struct shash_desc *desc, const u8 *data,
-				 unsigned int len, u8 *out)
-{
-	return sha256_base_init(desc) ?:
-	       riscv64_sha256_finup(desc, data, len, out);
-}
-
-static struct shash_alg riscv64_sha256_algs[] = {
-	{
-		.init = sha256_base_init,
-		.update = riscv64_sha256_update,
-		.finup = riscv64_sha256_finup,
-		.digest = riscv64_sha256_digest,
-		.descsize = sizeof(struct crypto_sha256_state),
-		.digestsize = SHA256_DIGEST_SIZE,
-		.base = {
-			.cra_blocksize = SHA256_BLOCK_SIZE,
-			.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
-				     CRYPTO_AHASH_ALG_FINUP_MAX,
-			.cra_priority = 300,
-			.cra_name = "sha256",
-			.cra_driver_name = "sha256-riscv64-zvknha_or_zvknhb-zvkb",
-			.cra_module = THIS_MODULE,
-		},
-	}, {
-		.init = sha224_base_init,
-		.update = riscv64_sha256_update,
-		.finup = riscv64_sha256_finup,
-		.descsize = sizeof(struct crypto_sha256_state),
-		.digestsize = SHA224_DIGEST_SIZE,
-		.base = {
-			.cra_blocksize = SHA224_BLOCK_SIZE,
-			.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY |
-				     CRYPTO_AHASH_ALG_FINUP_MAX,
-			.cra_priority = 300,
-			.cra_name = "sha224",
-			.cra_driver_name = "sha224-riscv64-zvknha_or_zvknhb-zvkb",
-			.cra_module = THIS_MODULE,
-		},
-	},
-};
-
-static int __init riscv64_sha256_mod_init(void)
-{
-	/* Both zvknha and zvknhb provide the SHA-256 instructions. */
-	if ((riscv_isa_extension_available(NULL, ZVKNHA) ||
-	     riscv_isa_extension_available(NULL, ZVKNHB)) &&
-	    riscv_isa_extension_available(NULL, ZVKB) &&
-	    riscv_vector_vlen() >= 128)
-		return crypto_register_shashes(riscv64_sha256_algs,
-					       ARRAY_SIZE(riscv64_sha256_algs));
-
-	return -ENODEV;
-}
-
-static void __exit riscv64_sha256_mod_exit(void)
-{
-	crypto_unregister_shashes(riscv64_sha256_algs,
-				  ARRAY_SIZE(riscv64_sha256_algs));
-}
-
-module_init(riscv64_sha256_mod_init);
-module_exit(riscv64_sha256_mod_exit);
-
-MODULE_DESCRIPTION("SHA-256 (RISC-V accelerated)");
-MODULE_AUTHOR("Heiko Stuebner");
-MODULE_LICENSE("GPL");
-MODULE_ALIAS_CRYPTO("sha256");
-MODULE_ALIAS_CRYPTO("sha224");
diff --git a/arch/riscv/lib/crypto/Kconfig b/arch/riscv/lib/crypto/Kconfig
index bc7a43f33eb3a..c100571feb7e8 100644
--- a/arch/riscv/lib/crypto/Kconfig
+++ b/arch/riscv/lib/crypto/Kconfig
@@ -4,5 +4,12 @@ config CRYPTO_CHACHA_RISCV64
 	tristate
 	depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
 	default CRYPTO_LIB_CHACHA
 	select CRYPTO_ARCH_HAVE_LIB_CHACHA
 	select CRYPTO_LIB_CHACHA_GENERIC
+
+config CRYPTO_SHA256_RISCV64
+	tristate
+	depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
+	default CRYPTO_LIB_SHA256
+	select CRYPTO_ARCH_HAVE_LIB_SHA256
+	select CRYPTO_LIB_SHA256_GENERIC
diff --git a/arch/riscv/lib/crypto/Makefile b/arch/riscv/lib/crypto/Makefile
index e27b78f317fc8..b7cb877a2c07e 100644
--- a/arch/riscv/lib/crypto/Makefile
+++ b/arch/riscv/lib/crypto/Makefile
@@ -1,4 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0-only
 
 obj-$(CONFIG_CRYPTO_CHACHA_RISCV64) += chacha-riscv64.o
 chacha-riscv64-y := chacha-riscv64-glue.o chacha-riscv64-zvkb.o
+
+obj-$(CONFIG_CRYPTO_SHA256_RISCV64) += sha256-riscv64.o
+sha256-riscv64-y := sha256.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o
diff --git a/arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.S b/arch/riscv/lib/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.S
similarity index 98%
rename from arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.S
rename to arch/riscv/lib/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.S
index f1f5779e47323..fad501ad06171 100644
--- a/arch/riscv/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.S
+++ b/arch/riscv/lib/crypto/sha256-riscv64-zvknha_or_zvknhb-zvkb.S
@@ -104,12 +104,12 @@
 	sha256_4rounds	\last, \k1, W1, W2, W3, W0
 	sha256_4rounds	\last, \k2, W2, W3, W0, W1
 	sha256_4rounds	\last, \k3, W3, W0, W1, W2
 .endm
 
-// void sha256_transform_zvknha_or_zvknhb_zvkb(u32 state[8], const u8 *data,
-//					       int num_blocks);
+// void sha256_transform_zvknha_or_zvknhb_zvkb(u32 state[SHA256_STATE_WORDS],
+//					       const u8 *data, size_t nblocks);
 SYM_FUNC_START(sha256_transform_zvknha_or_zvknhb_zvkb)
 
 	// Load the round constants into K0-K15.
 	vsetivli	zero, 4, e32, m1, ta, ma
 	la		t0, K256
diff --git a/arch/riscv/lib/crypto/sha256.c b/arch/riscv/lib/crypto/sha256.c
new file mode 100644
index 0000000000000..18b84030f0b39
--- /dev/null
+++ b/arch/riscv/lib/crypto/sha256.c
@@ -0,0 +1,62 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * SHA-256 (RISC-V accelerated)
+ *
+ * Copyright (C) 2022 VRULL GmbH
+ * Author: Heiko Stuebner
+ *
+ * Copyright (C) 2023 SiFive, Inc.
+ * Author: Jerry Shih
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+asmlinkage void sha256_transform_zvknha_or_zvknhb_zvkb(
+	u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks);
+
+static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_extensions);
+
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
+			const u8 *data, size_t nblocks)
+{
+	if (static_branch_likely(&have_extensions) && crypto_simd_usable()) {
+		kernel_vector_begin();
+		sha256_transform_zvknha_or_zvknhb_zvkb(state, data, nblocks);
+		kernel_vector_end();
+	} else {
+		sha256_blocks_generic(state, data, nblocks);
+	}
+}
+EXPORT_SYMBOL(sha256_blocks_arch);
+
+bool sha256_is_arch_optimized(void)
+{
+	return static_key_enabled(&have_extensions);
+}
+EXPORT_SYMBOL(sha256_is_arch_optimized);
+
+static int __init riscv64_sha256_mod_init(void)
+{
+	/* Both zvknha and zvknhb provide the SHA-256 instructions. */
+	if ((riscv_isa_extension_available(NULL, ZVKNHA) ||
+	     riscv_isa_extension_available(NULL, ZVKNHB)) &&
+	    riscv_isa_extension_available(NULL, ZVKB) &&
+	    riscv_vector_vlen() >= 128)
+		static_branch_enable(&have_extensions);
+	return 0;
+}
+arch_initcall(riscv64_sha256_mod_init);
+
+static void __exit riscv64_sha256_mod_exit(void)
+{
+}
+module_exit(riscv64_sha256_mod_exit);
+
+MODULE_DESCRIPTION("SHA-256 (RISC-V accelerated)");
+MODULE_AUTHOR("Heiko Stuebner");
+MODULE_LICENSE("GPL");
-- 
2.49.0

From nobody Sat Feb 7 05:57:37 2026
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	sparclinux@vger.kernel.org, linux-s390@vger.kernel.org, x86@kernel.org,
	Ard Biesheuvel, "Jason A. Donenfeld", Linus Torvalds
Subject: [PATCH 08/13] crypto: s390/sha256 - implement library instead of shash
Date: Fri, 25 Apr 2025 23:50:34 -0700
Message-ID: <20250426065041.1551914-9-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>
References: <20250426065041.1551914-1-ebiggers@kernel.org>

From: Eric Biggers

Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library. This is much simpler,
makes the SHA-256 library functions arch-optimized, and fixes the
longstanding issue where the arch-optimized SHA-256 was disabled by
default. SHA-256 still remains available through crypto_shash, but
individual architectures no longer need to handle it.

Signed-off-by: Eric Biggers
---
 arch/s390/configs/debug_defconfig |   1 -
 arch/s390/configs/defconfig       |   1 -
 arch/s390/crypto/Kconfig          |  10 ---
 arch/s390/crypto/Makefile         |   1 -
 arch/s390/crypto/sha256_s390.c    | 144 ------------------------
 arch/s390/lib/crypto/Kconfig      |   6 ++
 arch/s390/lib/crypto/Makefile     |   2 +
 arch/s390/lib/crypto/sha256.c     |  47 ++++++++
 8 files changed, 55 insertions(+), 157 deletions(-)
 delete mode 100644 arch/s390/crypto/sha256_s390.c
 create mode 100644 arch/s390/lib/crypto/sha256.c

diff --git a/arch/s390/configs/debug_defconfig b/arch/s390/configs/debug_defconfig
index 6f2c9ce1b1548..de69faa4d94f3 100644
--- a/arch/s390/configs/debug_defconfig
+++ b/arch/s390/configs/debug_defconfig
@@ -793,11 +793,10 @@ CONFIG_CRYPTO_USER_API_HASH=m
 CONFIG_CRYPTO_USER_API_SKCIPHER=m
 CONFIG_CRYPTO_USER_API_RNG=m
 CONFIG_CRYPTO_USER_API_AEAD=m
 CONFIG_CRYPTO_SHA512_S390=m
 CONFIG_CRYPTO_SHA1_S390=m
-CONFIG_CRYPTO_SHA256_S390=m
 CONFIG_CRYPTO_SHA3_256_S390=m
 CONFIG_CRYPTO_SHA3_512_S390=m
 CONFIG_CRYPTO_GHASH_S390=m
 CONFIG_CRYPTO_AES_S390=m
 CONFIG_CRYPTO_DES_S390=m
diff --git a/arch/s390/configs/defconfig b/arch/s390/configs/defconfig
index f18a7d97ac216..f12679448e976 100644
--- a/arch/s390/configs/defconfig
+++ b/arch/s390/configs/defconfig
@@ -780,11 +780,10 @@ CONFIG_CRYPTO_USER_API_HASH=m
 CONFIG_CRYPTO_USER_API_SKCIPHER=m
 CONFIG_CRYPTO_USER_API_RNG=m
 CONFIG_CRYPTO_USER_API_AEAD=m
 CONFIG_CRYPTO_SHA512_S390=m
 CONFIG_CRYPTO_SHA1_S390=m
-CONFIG_CRYPTO_SHA256_S390=m
 CONFIG_CRYPTO_SHA3_256_S390=m
 CONFIG_CRYPTO_SHA3_512_S390=m
 CONFIG_CRYPTO_GHASH_S390=m
 CONFIG_CRYPTO_AES_S390=m
 CONFIG_CRYPTO_DES_S390=m
diff --git a/arch/s390/crypto/Kconfig b/arch/s390/crypto/Kconfig
index a2bfd6eef0ca3..e2c27588b21a9 100644
--- a/arch/s390/crypto/Kconfig
+++ b/arch/s390/crypto/Kconfig
@@ -20,20 +20,10 @@ config CRYPTO_SHA1_S390
 
 	  Architecture: s390
 
 	  It is available as of z990.
 
-config CRYPTO_SHA256_S390
-	tristate "Hash functions: SHA-224 and SHA-256"
-	select CRYPTO_HASH
-	help
-	  SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
-
-	  Architecture: s390
-
-	  It is available as of z9.
-
 config CRYPTO_SHA3_256_S390
 	tristate "Hash functions: SHA3-224 and SHA3-256"
 	select CRYPTO_HASH
 	help
 	  SHA3-224 and SHA3-256 secure hash algorithms (FIPS 202)
diff --git a/arch/s390/crypto/Makefile b/arch/s390/crypto/Makefile
index e3853774e1a3a..21757d86cd499 100644
--- a/arch/s390/crypto/Makefile
+++ b/arch/s390/crypto/Makefile
@@ -2,11 +2,10 @@
 #
 # Cryptographic API
 #
 
 obj-$(CONFIG_CRYPTO_SHA1_S390) += sha1_s390.o sha_common.o
-obj-$(CONFIG_CRYPTO_SHA256_S390) += sha256_s390.o sha_common.o
 obj-$(CONFIG_CRYPTO_SHA512_S390) += sha512_s390.o sha_common.o
 obj-$(CONFIG_CRYPTO_SHA3_256_S390) += sha3_256_s390.o sha_common.o
 obj-$(CONFIG_CRYPTO_SHA3_512_S390) += sha3_512_s390.o sha_common.o
 obj-$(CONFIG_CRYPTO_DES_S390) += des_s390.o
 obj-$(CONFIG_CRYPTO_AES_S390) += aes_s390.o
diff --git a/arch/s390/crypto/sha256_s390.c b/arch/s390/crypto/sha256_s390.c
deleted file mode 100644
index e6876c49414d5..0000000000000
--- a/arch/s390/crypto/sha256_s390.c
+++ /dev/null
@@ -1,144 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0+
-/*
- * Cryptographic API.
- *
- * s390 implementation of the SHA256 and SHA224 Secure Hash Algorithm.
- *
- * s390 Version:
- *   Copyright IBM Corp. 2005, 2011
- *   Author(s): Jan Glauber (jang@de.ibm.com)
- */
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-#include "sha.h"
-
-static int s390_sha256_init(struct shash_desc *desc)
-{
-	struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
-
-	sctx->state[0] = SHA256_H0;
-	sctx->state[1] = SHA256_H1;
-	sctx->state[2] = SHA256_H2;
-	sctx->state[3] = SHA256_H3;
-	sctx->state[4] = SHA256_H4;
-	sctx->state[5] = SHA256_H5;
-	sctx->state[6] = SHA256_H6;
-	sctx->state[7] = SHA256_H7;
-	sctx->count = 0;
-	sctx->func = CPACF_KIMD_SHA_256;
-
-	return 0;
-}
-
-static int sha256_export(struct shash_desc *desc, void *out)
-{
-	struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
-	struct crypto_sha256_state *octx = out;
-
-	octx->count = sctx->count;
-	memcpy(octx->state, sctx->state, sizeof(octx->state));
-	return 0;
-}
-
-static int sha256_import(struct shash_desc *desc, const void *in)
-{
-	struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
-	const struct crypto_sha256_state *ictx = in;
-
-	sctx->count = ictx->count;
-	memcpy(sctx->state, ictx->state, sizeof(ictx->state));
-	sctx->func = CPACF_KIMD_SHA_256;
-	return 0;
-}
-
-static struct shash_alg sha256_alg = {
-	.digestsize	= SHA256_DIGEST_SIZE,
-	.init		= s390_sha256_init,
-	.update		= s390_sha_update_blocks,
-	.finup		= s390_sha_finup,
-	.export		= sha256_export,
-	.import		= sha256_import,
-	.descsize	= S390_SHA_CTX_SIZE,
-	.statesize	= sizeof(struct crypto_sha256_state),
-	.base		= {
-		.cra_name	= "sha256",
-		.cra_driver_name= "sha256-s390",
-		.cra_priority	= 300,
-		.cra_flags	= CRYPTO_AHASH_ALG_BLOCK_ONLY,
-		.cra_blocksize	= SHA256_BLOCK_SIZE,
-		.cra_module	= THIS_MODULE,
-	}
-};
-
-static int s390_sha224_init(struct shash_desc *desc)
-{
-	struct s390_sha_ctx *sctx = shash_desc_ctx(desc);
-
-	sctx->state[0] = SHA224_H0;
-	sctx->state[1] = SHA224_H1;
-	sctx->state[2] = SHA224_H2;
-	sctx->state[3] = SHA224_H3;
-	sctx->state[4] = SHA224_H4;
-	sctx->state[5] = SHA224_H5;
-	sctx->state[6] = SHA224_H6;
-	sctx->state[7] = SHA224_H7;
-	sctx->count = 0;
-	sctx->func = CPACF_KIMD_SHA_256;
-
-	return 0;
-}
-
-static struct shash_alg sha224_alg = {
-	.digestsize	= SHA224_DIGEST_SIZE,
-	.init		= s390_sha224_init,
-	.update		= s390_sha_update_blocks,
-	.finup		= s390_sha_finup,
-	.export		= sha256_export,
-	.import		= sha256_import,
-	.descsize	= S390_SHA_CTX_SIZE,
-	.statesize	= sizeof(struct crypto_sha256_state),
-	.base		= {
-		.cra_name	= "sha224",
-		.cra_driver_name= "sha224-s390",
-		.cra_priority	= 300,
-		.cra_flags	= CRYPTO_AHASH_ALG_BLOCK_ONLY,
-		.cra_blocksize	= SHA224_BLOCK_SIZE,
-		.cra_module	= THIS_MODULE,
-	}
-};
-
-static int __init sha256_s390_init(void)
-{
-	int ret;
-
-	if (!cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_256))
-		return -ENODEV;
-	ret = crypto_register_shash(&sha256_alg);
-	if (ret < 0)
-		goto out;
-	ret = crypto_register_shash(&sha224_alg);
-	if (ret < 0)
-		crypto_unregister_shash(&sha256_alg);
-out:
-	return ret;
-}
-
-static void __exit sha256_s390_fini(void)
-{
-	crypto_unregister_shash(&sha224_alg);
-	crypto_unregister_shash(&sha256_alg);
-}
-
-module_cpu_feature_match(S390_CPU_FEATURE_MSA, sha256_s390_init);
-module_exit(sha256_s390_fini);
-
-MODULE_ALIAS_CRYPTO("sha256");
-MODULE_ALIAS_CRYPTO("sha224");
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("SHA256 and SHA224 Secure Hash Algorithm");
diff --git a/arch/s390/lib/crypto/Kconfig b/arch/s390/lib/crypto/Kconfig
index 069b355fe51aa..e3f855ef43934 100644
--- a/arch/s390/lib/crypto/Kconfig
+++ b/arch/s390/lib/crypto/Kconfig
@@ -3,5 +3,11 @@ config CRYPTO_CHACHA_S390
 	tristate
 	default CRYPTO_LIB_CHACHA
 	select CRYPTO_LIB_CHACHA_GENERIC
 	select CRYPTO_ARCH_HAVE_LIB_CHACHA
+
+config CRYPTO_SHA256_S390
+	tristate
+	default CRYPTO_LIB_SHA256
+	select CRYPTO_ARCH_HAVE_LIB_SHA256
+	select CRYPTO_LIB_SHA256_GENERIC
diff --git a/arch/s390/lib/crypto/Makefile b/arch/s390/lib/crypto/Makefile
index 06c2cf77178ef..920197967f463 100644 --- a/arch/s390/lib/crypto/Makefile +++ b/arch/s390/lib/crypto/Makefile @@ -1,4 +1,6 @@ # SPDX-License-Identifier: GPL-2.0-only =20 obj-$(CONFIG_CRYPTO_CHACHA_S390) +=3D chacha_s390.o chacha_s390-y :=3D chacha-glue.o chacha-s390.o + +obj-$(CONFIG_CRYPTO_SHA256_S390) +=3D sha256.o diff --git a/arch/s390/lib/crypto/sha256.c b/arch/s390/lib/crypto/sha256.c new file mode 100644 index 0000000000000..50c592ce7a5de --- /dev/null +++ b/arch/s390/lib/crypto/sha256.c @@ -0,0 +1,47 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * SHA-256 optimized using the CP Assist for Cryptographic Functions (CPAC= F) + * + * Copyright 2025 Google LLC + */ +#include +#include +#include +#include +#include + +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_cpacf_sha256); + +void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks) +{ + if (static_branch_likely(&have_cpacf_sha256)) + cpacf_kimd(CPACF_KIMD_SHA_256, state, data, + nblocks * SHA256_BLOCK_SIZE); + else + sha256_blocks_generic(state, data, nblocks); +} +EXPORT_SYMBOL(sha256_blocks_arch); + +bool sha256_is_arch_optimized(void) +{ + return static_key_enabled(&have_cpacf_sha256); +} +EXPORT_SYMBOL(sha256_is_arch_optimized); + +static int __init sha256_s390_mod_init(void) +{ + if (cpu_have_feature(S390_CPU_FEATURE_MSA) && + cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_256)) + static_branch_enable(&have_cpacf_sha256); + return 0; +} +arch_initcall(sha256_s390_mod_init); + +static void __exit sha256_s390_mod_exit(void) +{ +} +module_exit(sha256_s390_mod_exit); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("SHA-256 using the CP Assist for Cryptographic Function= s (CPACF)"); --=20 2.49.0 From nobody Sat Feb 7 05:57:37 2026 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org 
(Postfix) with ESMTPS id E00E01FDE33; Sat, 26 Apr 2025 06:51:44 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1745650305; cv=none; b=eEk7lcPjHhuaJW96upDf0yJwt7ut8M2Hcauj7npoJ1hByLLf0+i64/+Iz4t6+dH8J1a+drR6skPgGYY3E/nyrncSr2NbjkAcdU0JYlh+IQKmEMkxDf8UwCMXMCfy7oKrT0SRySOf53YeXmdTSziRFAVRuUIZipV4UDYz8qalntk= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1745650305; c=relaxed/simple; bh=frmoNMX9rkL1gGxdvxYvF+q9FW0efq8uiNqM3qhMIXk=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=GFvXffKDCkZydyiTqb4fsM60KfCvSiwDY/H265ZtCMhV58OVmZqdHfmKGrDzp+BRnJx93FXWTo+LUhHLBlHWaQNymwBdLVYrqOSoy+14do7rxhc38rOG8mJovbPq63W0mAFqNih/AITH2+Os4M/prSO0yGzJKuL03oKZEIwTHJg= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=oPSkMeWn; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="oPSkMeWn" Received: by smtp.kernel.org (Postfix) with ESMTPSA id E3427C4AF0B; Sat, 26 Apr 2025 06:51:43 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1745650304; bh=frmoNMX9rkL1gGxdvxYvF+q9FW0efq8uiNqM3qhMIXk=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=oPSkMeWnyIurxq0SeQFDNzck9Xk9VpviNNmbWNvBFfJnjngDZ4UbsQ3+hzGEEe6pR XkY7x9RxNFerL1WyjtJY7UsXDx8/LWDM2Mk8/xU8jDvaNc5i06RwYwYTDBOeSdRZHx mXbC/WYzRNQce77OCmcsYYuAZZqTIIrwR/nRCjOSgD8/j1JSRsu4oUTjM1r0kjriUz UllbW+msgPPJPe9qphniSzEwoxgqRj6kM9klnOR/mm0izM78qogHtW/xS45FKaLKJd xNc5nEdH9k3pOB7KOvQOskLI5BfxB4Wl7ymTdSpKd6BpZrP/YwJiuTQiomEfSnj/DO bDZZz//wkLHag== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, 
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    sparclinux@vger.kernel.org, linux-s390@vger.kernel.org, x86@kernel.org,
    Ard Biesheuvel, "Jason A. Donenfeld", Linus Torvalds
Subject: [PATCH 09/13] crypto: sparc - move opcodes.h into asm directory
Date: Fri, 25 Apr 2025 23:50:35 -0700
Message-ID: <20250426065041.1551914-10-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>
References: <20250426065041.1551914-1-ebiggers@kernel.org>

From: Eric Biggers

Since arch/sparc/crypto/opcodes.h is now needed outside the
arch/sparc/crypto/ directory, move it into arch/sparc/include/asm/ so
that it can be included as <asm/opcodes.h>.

Signed-off-by: Eric Biggers
---
 arch/sparc/crypto/aes_asm.S                  | 3 +--
 arch/sparc/crypto/aes_glue.c                 | 3 +--
 arch/sparc/crypto/camellia_asm.S             | 3 +--
 arch/sparc/crypto/camellia_glue.c            | 3 +--
 arch/sparc/crypto/des_asm.S                  | 3 +--
 arch/sparc/crypto/des_glue.c                 | 3 +--
 arch/sparc/crypto/md5_asm.S                  | 3 +--
 arch/sparc/crypto/md5_glue.c                 | 3 +--
 arch/sparc/crypto/sha1_asm.S                 | 3 +--
 arch/sparc/crypto/sha1_glue.c                | 3 +--
 arch/sparc/crypto/sha256_asm.S               | 3 +--
 arch/sparc/crypto/sha256_glue.c              | 3 +--
 arch/sparc/crypto/sha512_asm.S               | 3 +--
 arch/sparc/crypto/sha512_glue.c              | 3 +--
 arch/sparc/{crypto => include/asm}/opcodes.h | 6 +++---
 arch/sparc/lib/crc32c_asm.S                  | 3 +--
 16 files changed, 18 insertions(+), 33 deletions(-)
 rename arch/sparc/{crypto => include/asm}/opcodes.h (96%)

diff --git a/arch/sparc/crypto/aes_asm.S b/arch/sparc/crypto/aes_asm.S
index 155cefb98520e..f291174a72a1d 100644
--- a/arch/sparc/crypto/aes_asm.S
+++ b/arch/sparc/crypto/aes_asm.S
@@ -1,11 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include
+#include <asm/opcodes.h>
 #include

-#include "opcodes.h"
-
 #define ENCRYPT_TWO_ROUNDS(KEY_BASE, I0, I1, T0, T1) \
	AES_EROUND01(KEY_BASE +  0, I0, I1, T0) \
	AES_EROUND23(KEY_BASE +  2, I0, I1, T1) \
	AES_EROUND01(KEY_BASE +  4, T0, T1, I0) \
	AES_EROUND23(KEY_BASE +  6, T0, T1, I1)
diff --git a/arch/sparc/crypto/aes_glue.c b/arch/sparc/crypto/aes_glue.c
index 6831508303562..359f22643b051 100644
--- a/arch/sparc/crypto/aes_glue.c
+++ b/arch/sparc/crypto/aes_glue.c
@@ -25,15 +25,14 @@
 #include
 #include
 #include

 #include
+#include <asm/opcodes.h>
 #include
 #include

-#include "opcodes.h"
-
 struct aes_ops {
	void (*encrypt)(const u64 *key, const u32 *input, u32 *output);
	void (*decrypt)(const u64 *key, const u32 *input, u32 *output);
	void (*load_encrypt_keys)(const u64 *key);
	void (*load_decrypt_keys)(const u64 *key);
diff --git a/arch/sparc/crypto/camellia_asm.S b/arch/sparc/crypto/camellia_asm.S
index dcdc9193fcd72..8471b346ef548 100644
--- a/arch/sparc/crypto/camellia_asm.S
+++ b/arch/sparc/crypto/camellia_asm.S
@@ -1,11 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include
+#include <asm/opcodes.h>
 #include

-#include "opcodes.h"
-
 #define CAMELLIA_6ROUNDS(KEY_BASE, I0, I1) \
	CAMELLIA_F(KEY_BASE +  0, I1, I0, I1) \
	CAMELLIA_F(KEY_BASE +  2, I0, I1, I0) \
	CAMELLIA_F(KEY_BASE +  4, I1, I0, I1) \
	CAMELLIA_F(KEY_BASE +  6, I0, I1, I0) \
diff --git a/arch/sparc/crypto/camellia_glue.c b/arch/sparc/crypto/camellia_glue.c
index aaa9714378e66..e7a1e1c42b996 100644
--- a/arch/sparc/crypto/camellia_glue.c
+++ b/arch/sparc/crypto/camellia_glue.c
@@ -13,15 +13,14 @@
 #include
 #include
 #include

 #include
+#include <asm/opcodes.h>
 #include
 #include

-#include "opcodes.h"
-
 #define CAMELLIA_MIN_KEY_SIZE        16
 #define CAMELLIA_MAX_KEY_SIZE        32
 #define CAMELLIA_BLOCK_SIZE          16
 #define CAMELLIA_TABLE_BYTE_LEN     272

diff --git a/arch/sparc/crypto/des_asm.S b/arch/sparc/crypto/des_asm.S
index 7157468a679df..d534446cbef9a 100644
--- a/arch/sparc/crypto/des_asm.S
+++ b/arch/sparc/crypto/des_asm.S
@@ -1,11 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include
+#include <asm/opcodes.h>
 #include

-#include "opcodes.h"
-
 	.align	32
 ENTRY(des_sparc64_key_expand)	/* %o0=input_key, %o1=output_key */
	VISEntryHalf
	ld	[%o0 + 0x00], %f0
diff --git a/arch/sparc/crypto/des_glue.c b/arch/sparc/crypto/des_glue.c
index a499102bf7065..e50ec4cd57cde 100644
--- a/arch/sparc/crypto/des_glue.c
+++ b/arch/sparc/crypto/des_glue.c
@@ -14,15 +14,14 @@
 #include
 #include
 #include

 #include
+#include <asm/opcodes.h>
 #include
 #include

-#include "opcodes.h"
-
 struct des_sparc64_ctx {
	u64 encrypt_expkey[DES_EXPKEY_WORDS / 2];
	u64 decrypt_expkey[DES_EXPKEY_WORDS / 2];
 };

diff --git a/arch/sparc/crypto/md5_asm.S b/arch/sparc/crypto/md5_asm.S
index 7a6637455f37a..60b544e4d205b 100644
--- a/arch/sparc/crypto/md5_asm.S
+++ b/arch/sparc/crypto/md5_asm.S
@@ -1,11 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include
+#include <asm/opcodes.h>
 #include

-#include "opcodes.h"
-
 ENTRY(md5_sparc64_transform)	/* %o0 = digest, %o1 = data, %o2 = rounds */
	VISEntryHalf
	ld	[%o0 + 0x00], %f0
	ld	[%o0 + 0x04], %f1
diff --git a/arch/sparc/crypto/md5_glue.c b/arch/sparc/crypto/md5_glue.c
index 5b018c6a376c4..b3615f0cdf626 100644
--- a/arch/sparc/crypto/md5_glue.c
+++ b/arch/sparc/crypto/md5_glue.c
@@ -13,21 +13,20 @@
  */

 #define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt

 #include
+#include <asm/opcodes.h>
 #include
 #include
 #include
 #include
 #include
 #include
 #include
 #include

-#include "opcodes.h"
-
 struct sparc_md5_state {
	__le32 hash[MD5_HASH_WORDS];
	u64 byte_count;
 };

diff --git a/arch/sparc/crypto/sha1_asm.S b/arch/sparc/crypto/sha1_asm.S
index 7d8bf354f0e79..00b46bac1b08f 100644
--- a/arch/sparc/crypto/sha1_asm.S
+++ b/arch/sparc/crypto/sha1_asm.S
@@ -1,11 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include
+#include <asm/opcodes.h>
 #include

-#include "opcodes.h"
-
 ENTRY(sha1_sparc64_transform)	/* %o0 = digest, %o1 = data, %o2 = rounds */
	VISEntryHalf
	ld	[%o0 + 0x00], %f0
	ld	[%o0 + 0x04], %f1
diff --git a/arch/sparc/crypto/sha1_glue.c b/arch/sparc/crypto/sha1_glue.c
index ec5a06948e0d4..ef19d5023b1bc 100644
--- a/arch/sparc/crypto/sha1_glue.c
+++ b/arch/sparc/crypto/sha1_glue.c
@@ -10,19 +10,18 @@
  */

 #define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt

 #include
+#include <asm/opcodes.h>
 #include
 #include
 #include
 #include
 #include
 #include

-#include "opcodes.h"
-
 asmlinkage void sha1_sparc64_transform(struct sha1_state *digest,
				       const u8 *data, int rounds);

 static int sha1_sparc64_update(struct shash_desc *desc, const u8 *data,
			       unsigned int len)
diff --git a/arch/sparc/crypto/sha256_asm.S b/arch/sparc/crypto/sha256_asm.S
index 0b39ec7d7ca29..8ce88611e98ad 100644
--- a/arch/sparc/crypto/sha256_asm.S
+++ b/arch/sparc/crypto/sha256_asm.S
@@ -1,11 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include
+#include <asm/opcodes.h>
 #include

-#include "opcodes.h"
-
 ENTRY(sha256_sparc64_transform)	/* %o0 = digest, %o1 = data, %o2 = rounds */
	VISEntryHalf
	ld	[%o0 + 0x00], %f0
	ld	[%o0 + 0x04], %f1
diff --git a/arch/sparc/crypto/sha256_glue.c b/arch/sparc/crypto/sha256_glue.c
index ddb250242faf4..25008603a9868 100644
--- a/arch/sparc/crypto/sha256_glue.c
+++ b/arch/sparc/crypto/sha256_glue.c
@@ -10,19 +10,18 @@
  */

 #define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt

 #include
+#include <asm/opcodes.h>
 #include
 #include
 #include
 #include
 #include
 #include

-#include "opcodes.h"
-
 asmlinkage void sha256_sparc64_transform(u32 *digest, const char *data,
					 unsigned int rounds);

 static void sha256_block(struct crypto_sha256_state *sctx, const u8 *src,
			 int blocks)
diff --git a/arch/sparc/crypto/sha512_asm.S b/arch/sparc/crypto/sha512_asm.S
index b2f6e67288023..9932b4fe1b599 100644
--- a/arch/sparc/crypto/sha512_asm.S
+++ b/arch/sparc/crypto/sha512_asm.S
@@ -1,11 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include
+#include <asm/opcodes.h>
 #include

-#include "opcodes.h"
-
 ENTRY(sha512_sparc64_transform)	/* %o0 = digest, %o1 = data, %o2 = rounds */
	VISEntry
	ldd	[%o0 + 0x00], %f0
	ldd	[%o0 + 0x08], %f2
diff --git a/arch/sparc/crypto/sha512_glue.c b/arch/sparc/crypto/sha512_glue.c
index 1d0e1f98ca461..47b9277b6877a 100644
--- a/arch/sparc/crypto/sha512_glue.c
+++ b/arch/sparc/crypto/sha512_glue.c
@@ -9,19 +9,18 @@
  */

 #define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt

 #include
+#include <asm/opcodes.h>
 #include
 #include
 #include
 #include
 #include

-#include "opcodes.h"
-
 asmlinkage void sha512_sparc64_transform(u64 *digest, const char *data,
					 unsigned int rounds);

 static void sha512_block(struct sha512_state *sctx, const u8 *src,
			 int blocks)
 {
diff --git a/arch/sparc/crypto/opcodes.h b/arch/sparc/include/asm/opcodes.h
similarity index 96%
rename from arch/sparc/crypto/opcodes.h
rename to arch/sparc/include/asm/opcodes.h
index 417b6a10a337a..ebfda6eb49b26 100644
--- a/arch/sparc/crypto/opcodes.h
+++ b/arch/sparc/include/asm/opcodes.h
@@ -1,8 +1,8 @@
 /* SPDX-License-Identifier: GPL-2.0 */
-#ifndef _OPCODES_H
-#define _OPCODES_H
+#ifndef _SPARC_ASM_OPCODES_H
+#define _SPARC_ASM_OPCODES_H

 #define SPARC_CR_OPCODE_PRIORITY	300

 #define F3F(x,y,z)	(((x)<<30)|((y)<<19)|((z)<<5))

@@ -95,6 +95,6 @@
 #define MOVXTOD_G3_F60 \
	.word	0xbbb02303;
 #define MOVXTOD_G7_F62 \
	.word	0xbfb02307;

-#endif /* _OPCODES_H */
+#endif /* _SPARC_ASM_OPCODES_H */
diff --git a/arch/sparc/lib/crc32c_asm.S b/arch/sparc/lib/crc32c_asm.S
index ee454fa6aed68..4db873850f44c 100644
--- a/arch/sparc/lib/crc32c_asm.S
+++ b/arch/sparc/lib/crc32c_asm.S
@@ -1,12 +1,11 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 #include
+#include <asm/opcodes.h>
 #include
 #include

-#include "../crypto/opcodes.h"
-
 ENTRY(crc32c_sparc64)	/* %o0=crc32p, %o1=data_ptr, %o2=len */
	VISEntryHalf
	lda	[%o0] ASI_PL, %f1
 1:	ldd	[%o1], %f2
-- 
2.49.0

From nobody Sat Feb 7 05:57:37 2026
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    sparclinux@vger.kernel.org, linux-s390@vger.kernel.org, x86@kernel.org,
    Ard Biesheuvel, "Jason A. Donenfeld", Linus Torvalds
Subject: [PATCH 10/13] crypto: sparc/sha256 - implement library instead of shash
Date: Fri, 25 Apr 2025 23:50:36 -0700
Message-ID: <20250426065041.1551914-11-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>
References: <20250426065041.1551914-1-ebiggers@kernel.org>

From: Eric Biggers

Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library.  This is much simpler, it
makes the SHA-256 library functions arch-optimized, and it fixes the
longstanding issue where the arch-optimized SHA-256 was disabled by
default.  SHA-256 still remains available through crypto_shash, but
individual architectures no longer need to handle it.
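The dispatch introduced by this patch can be sketched in userspace C. This is a hypothetical model, not the kernel code: the `_model` suffixes, the call counters, and the plain `bool` standing in for the kernel's static key are all illustration-only. It shows the key design point — there is one library entry point, and the implementation is chosen once at init time (when the CPU feature is detected) rather than by each caller.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Kernel uses DEFINE_STATIC_KEY_FALSE + static_branch_likely(); a plain
 * bool models the same "set once at init, read on every call" pattern. */
static bool have_sha256_opcodes;

/* Counters so the demo can observe which path ran (illustration only). */
static int generic_calls, opcode_calls;

static void sha256_blocks_generic_model(uint32_t state[8],
					const uint8_t *data, size_t nblocks)
{
	(void)state; (void)data; (void)nblocks;
	generic_calls++;	/* stands in for the portable C implementation */
}

static void sha256_sparc64_transform_model(uint32_t state[8],
					   const uint8_t *data, size_t nblocks)
{
	(void)state; (void)data; (void)nblocks;
	opcode_calls++;		/* stands in for the sha256-opcode asm routine */
}

/* One entry point; callers never pick an implementation themselves. */
static void sha256_blocks_arch_model(uint32_t state[8],
				     const uint8_t *data, size_t nblocks)
{
	if (have_sha256_opcodes)
		sha256_sparc64_transform_model(state, data, nblocks);
	else
		sha256_blocks_generic_model(state, data, nblocks);
}

/* Drive one call before and one after "feature detection" and encode the
 * two counters as generic*10 + opcode. */
int sha256_dispatch_demo(void)
{
	uint32_t st[8] = {0};
	uint8_t block[64] = {0};

	sha256_blocks_arch_model(st, block, 1);	/* falls back to generic */
	have_sha256_opcodes = true;		/* as if CFR_SHA256 were set */
	sha256_blocks_arch_model(st, block, 1);	/* now takes the fast path */
	return generic_calls * 10 + opcode_calls;
}
```

In the real patch the init-time probe (`sha256_sparc64_mod_init()` below) flips the static key, so the per-call branch is patched to a no-op once the hardware capability is known.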
Signed-off-by: Eric Biggers
---
 arch/sparc/crypto/Kconfig                |  10 --
 arch/sparc/crypto/Makefile               |   2 -
 arch/sparc/crypto/sha256_glue.c          | 128 -----------------------
 arch/sparc/lib/Makefile                  |   1 +
 arch/sparc/lib/crypto/Kconfig            |   8 ++
 arch/sparc/lib/crypto/Makefile           |   4 +
 arch/sparc/lib/crypto/sha256.c           |  64 ++++++++++++
 arch/sparc/{ => lib}/crypto/sha256_asm.S |   2 +-
 lib/crypto/Kconfig                       |   3 +
 9 files changed, 81 insertions(+), 141 deletions(-)
 delete mode 100644 arch/sparc/crypto/sha256_glue.c
 create mode 100644 arch/sparc/lib/crypto/Kconfig
 create mode 100644 arch/sparc/lib/crypto/Makefile
 create mode 100644 arch/sparc/lib/crypto/sha256.c
 rename arch/sparc/{ => lib}/crypto/sha256_asm.S (96%)

diff --git a/arch/sparc/crypto/Kconfig b/arch/sparc/crypto/Kconfig
index e858597de89db..a6ba319c42dce 100644
--- a/arch/sparc/crypto/Kconfig
+++ b/arch/sparc/crypto/Kconfig
@@ -34,20 +34,10 @@ config CRYPTO_SHA1_SPARC64
	help
	  SHA-1 secure hash algorithm (FIPS 180)

	  Architecture: sparc64

-config CRYPTO_SHA256_SPARC64
-	tristate "Hash functions: SHA-224 and SHA-256"
-	depends on SPARC64
-	select CRYPTO_SHA256
-	select CRYPTO_HASH
-	help
-	  SHA-224 and SHA-256 secure hash algorithms (FIPS 180)
-
-	  Architecture: sparc64 using crypto instructions, when available
-
 config CRYPTO_SHA512_SPARC64
	tristate "Hash functions: SHA-384 and SHA-512"
	depends on SPARC64
	select CRYPTO_SHA512
	select CRYPTO_HASH
diff --git a/arch/sparc/crypto/Makefile b/arch/sparc/crypto/Makefile
index a2d7fca40cb4b..701c39edb0d73 100644
--- a/arch/sparc/crypto/Makefile
+++ b/arch/sparc/crypto/Makefile
@@ -2,20 +2,18 @@
 #
 # Arch-specific CryptoAPI modules.
 #

 obj-$(CONFIG_CRYPTO_SHA1_SPARC64) += sha1-sparc64.o
-obj-$(CONFIG_CRYPTO_SHA256_SPARC64) += sha256-sparc64.o
 obj-$(CONFIG_CRYPTO_SHA512_SPARC64) += sha512-sparc64.o
 obj-$(CONFIG_CRYPTO_MD5_SPARC64) += md5-sparc64.o

 obj-$(CONFIG_CRYPTO_AES_SPARC64) += aes-sparc64.o
 obj-$(CONFIG_CRYPTO_DES_SPARC64) += des-sparc64.o
 obj-$(CONFIG_CRYPTO_CAMELLIA_SPARC64) += camellia-sparc64.o

 sha1-sparc64-y := sha1_asm.o sha1_glue.o
-sha256-sparc64-y := sha256_asm.o sha256_glue.o
 sha512-sparc64-y := sha512_asm.o sha512_glue.o
 md5-sparc64-y := md5_asm.o md5_glue.o

 aes-sparc64-y := aes_asm.o aes_glue.o
 des-sparc64-y := des_asm.o des_glue.o
diff --git a/arch/sparc/crypto/sha256_glue.c b/arch/sparc/crypto/sha256_glue.c
deleted file mode 100644
index 25008603a9868..0000000000000
--- a/arch/sparc/crypto/sha256_glue.c
+++ /dev/null
@@ -1,128 +0,0 @@
-// SPDX-License-Identifier: GPL-2.0-only
-/* Glue code for SHA256 hashing optimized for sparc64 crypto opcodes.
- *
- * This is based largely upon crypto/sha256_generic.c
- *
- * Copyright (c) Jean-Luc Cooke
- * Copyright (c) Andrew McDonald
- * Copyright (c) 2002 James Morris
- * SHA224 Support Copyright 2007 Intel Corporation
- */
-
-#define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
-
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-#include
-
-asmlinkage void sha256_sparc64_transform(u32 *digest, const char *data,
-					 unsigned int rounds);
-
-static void sha256_block(struct crypto_sha256_state *sctx, const u8 *src,
-			 int blocks)
-{
-	sha256_sparc64_transform(sctx->state, src, blocks);
-}
-
-static int sha256_sparc64_update(struct shash_desc *desc, const u8 *data,
-				 unsigned int len)
-{
-	return sha256_base_do_update_blocks(desc, data, len, sha256_block);
-}
-
-static int sha256_sparc64_finup(struct shash_desc *desc, const u8 *src,
-				unsigned int len, u8 *out)
-{
-	sha256_base_do_finup(desc, src, len, sha256_block);
-	return sha256_base_finish(desc, out);
-}
-
-static struct shash_alg sha256_alg = {
-	.digestsize	= SHA256_DIGEST_SIZE,
-	.init		= sha256_base_init,
-	.update		= sha256_sparc64_update,
-	.finup		= sha256_sparc64_finup,
-	.descsize	= sizeof(struct crypto_sha256_state),
-	.base		= {
-		.cra_name	= "sha256",
-		.cra_driver_name= "sha256-sparc64",
-		.cra_priority	= SPARC_CR_OPCODE_PRIORITY,
-		.cra_flags	= CRYPTO_AHASH_ALG_BLOCK_ONLY |
-				  CRYPTO_AHASH_ALG_FINUP_MAX,
-		.cra_blocksize	= SHA256_BLOCK_SIZE,
-		.cra_module	= THIS_MODULE,
-	}
-};
-
-static struct shash_alg sha224_alg = {
-	.digestsize	= SHA224_DIGEST_SIZE,
-	.init		= sha224_base_init,
-	.update		= sha256_sparc64_update,
-	.finup		= sha256_sparc64_finup,
-	.descsize	= sizeof(struct crypto_sha256_state),
-	.base		= {
-		.cra_name	= "sha224",
-		.cra_driver_name= "sha224-sparc64",
-		.cra_priority	= SPARC_CR_OPCODE_PRIORITY,
-		.cra_flags	= CRYPTO_AHASH_ALG_BLOCK_ONLY |
-				  CRYPTO_AHASH_ALG_FINUP_MAX,
-		.cra_blocksize	= SHA224_BLOCK_SIZE,
-		.cra_module	= THIS_MODULE,
-	}
-};
-
-static bool __init sparc64_has_sha256_opcode(void)
-{
-	unsigned long cfr;
-
-	if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO))
-		return false;
-
-	__asm__ __volatile__("rd %%asr26, %0" : "=r" (cfr));
-	if (!(cfr & CFR_SHA256))
-		return false;
-
-	return true;
-}
-
-static int __init sha256_sparc64_mod_init(void)
-{
-	if (sparc64_has_sha256_opcode()) {
-		int ret = crypto_register_shash(&sha224_alg);
-		if (ret < 0)
-			return ret;
-
-		ret = crypto_register_shash(&sha256_alg);
-		if (ret < 0) {
-			crypto_unregister_shash(&sha224_alg);
-			return ret;
-		}
-
-		pr_info("Using sparc64 sha256 opcode optimized SHA-256/SHA-224 implementation\n");
-		return 0;
-	}
-	pr_info("sparc64 sha256 opcode not available.\n");
-	return -ENODEV;
-}
-
-static void __exit sha256_sparc64_mod_fini(void)
-{
-	crypto_unregister_shash(&sha224_alg);
-	crypto_unregister_shash(&sha256_alg);
-}
-
-module_init(sha256_sparc64_mod_init);
-module_exit(sha256_sparc64_mod_fini);
-
-MODULE_LICENSE("GPL");
-MODULE_DESCRIPTION("SHA-224 and SHA-256 Secure Hash Algorithm, sparc64 sha256 opcode accelerated");
-
-MODULE_ALIAS_CRYPTO("sha224");
-MODULE_ALIAS_CRYPTO("sha256");
-
-#include "crop_devid.c"
diff --git a/arch/sparc/lib/Makefile b/arch/sparc/lib/Makefile
index 5724d0f356eb5..98887dc295a1e 100644
--- a/arch/sparc/lib/Makefile
+++ b/arch/sparc/lib/Makefile
@@ -2,10 +2,11 @@
 # Makefile for Sparc library files..
 #

 asflags-y := -ansi -DST_DIV0=0x02

+obj-y                 += crypto/
 lib-$(CONFIG_SPARC32) += ashrdi3.o
 lib-$(CONFIG_SPARC32) += memcpy.o memset.o
 lib-y                 += strlen.o
 lib-y                 += checksum_$(BITS).o
 lib-$(CONFIG_SPARC32) += blockops.o
diff --git a/arch/sparc/lib/crypto/Kconfig b/arch/sparc/lib/crypto/Kconfig
new file mode 100644
index 0000000000000..e5c3e4d3dba62
--- /dev/null
+++ b/arch/sparc/lib/crypto/Kconfig
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+config CRYPTO_SHA256_SPARC64
+	tristate
+	depends on SPARC64
+	default CRYPTO_LIB_SHA256
+	select CRYPTO_ARCH_HAVE_LIB_SHA256
+	select CRYPTO_LIB_SHA256_GENERIC
diff --git a/arch/sparc/lib/crypto/Makefile b/arch/sparc/lib/crypto/Makefile
new file mode 100644
index 0000000000000..75ee244ad6f79
--- /dev/null
+++ b/arch/sparc/lib/crypto/Makefile
@@ -0,0 +1,4 @@
+# SPDX-License-Identifier: GPL-2.0-only
+
+obj-$(CONFIG_CRYPTO_SHA256_SPARC64) += sha256-sparc64.o
+sha256-sparc64-y := sha256.o sha256_asm.o
diff --git a/arch/sparc/lib/crypto/sha256.c b/arch/sparc/lib/crypto/sha256.c
new file mode 100644
index 0000000000000..6f118a23d210a
--- /dev/null
+++ b/arch/sparc/lib/crypto/sha256.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * SHA-256 accelerated using the sparc64 sha256 opcodes
+ *
+ * Copyright (c) Jean-Luc Cooke
+ * Copyright (c) Andrew McDonald
+ * Copyright (c) 2002 James Morris
+ * SHA224 Support Copyright 2007 Intel Corporation
+ */
+
+#define pr_fmt(fmt)	KBUILD_MODNAME ": " fmt
+
+#include
+#include
+#include
+#include
+#include
+#include
+
+static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_opcodes);
+
+asmlinkage void sha256_sparc64_transform(u32 state[SHA256_STATE_WORDS],
+					 const u8 *data, size_t nblocks);
+
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
+			const u8 *data, size_t nblocks)
+{
+	if (static_branch_likely(&have_sha256_opcodes))
+		sha256_sparc64_transform(state, data, nblocks);
+	else
+		sha256_blocks_generic(state, data, nblocks);
+}
+EXPORT_SYMBOL(sha256_blocks_arch);
+
+bool sha256_is_arch_optimized(void)
+{
+	return static_key_enabled(&have_sha256_opcodes);
+}
+EXPORT_SYMBOL(sha256_is_arch_optimized);
+
+static int __init sha256_sparc64_mod_init(void)
+{
+	unsigned long cfr;
+
+	if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO))
+		return 0;
+
+	__asm__ __volatile__("rd %%asr26, %0" : "=r" (cfr));
+	if (!(cfr & CFR_SHA256))
+		return 0;
+
+	static_branch_enable(&have_sha256_opcodes);
+	pr_info("Using sparc64 sha256 opcode optimized SHA-256/SHA-224 implementation\n");
+	return 0;
+}
+arch_initcall(sha256_sparc64_mod_init);
+
+static void __exit sha256_sparc64_mod_exit(void)
+{
+}
+module_exit(sha256_sparc64_mod_exit);
+
+MODULE_LICENSE("GPL");
+MODULE_DESCRIPTION("SHA-256 accelerated using the sparc64 sha256 opcodes");
diff --git a/arch/sparc/crypto/sha256_asm.S b/arch/sparc/lib/crypto/sha256_asm.S
similarity index 96%
rename from arch/sparc/crypto/sha256_asm.S
rename to arch/sparc/lib/crypto/sha256_asm.S
index 8ce88611e98ad..ddcdd3daf31e3 100644
--- a/arch/sparc/crypto/sha256_asm.S
+++ b/arch/sparc/lib/crypto/sha256_asm.S
@@ -2,11 +2,11 @@
 #include
 #include
 #include

 ENTRY(sha256_sparc64_transform)
-	/* %o0 = digest, %o1 = data, %o2 = rounds */
+	/* %o0 = state, %o1 = data, %o2 = nblocks */
	VISEntryHalf
	ld	[%o0 + 0x00], %f0
	ld	[%o0 + 0x04], %f1
	ld	[%o0 + 0x08], %f2
	ld	[%o0 + 0x0c], %f3
diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
index 7fe678047939b..6319358b38c20 100644
--- a/lib/crypto/Kconfig
+++ b/lib/crypto/Kconfig
@@ -179,10 +179,13 @@ if RISCV
 source "arch/riscv/lib/crypto/Kconfig"
 endif
 if S390
 source "arch/s390/lib/crypto/Kconfig"
 endif
+if SPARC
+source "arch/sparc/lib/crypto/Kconfig"
+endif
 if X86
 source "arch/x86/lib/crypto/Kconfig"
 endif
 endif

-- 
2.49.0

From nobody Sat Feb 7 05:57:37 2026
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org,
    linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    sparclinux@vger.kernel.org, linux-s390@vger.kernel.org, x86@kernel.org,
    Ard Biesheuvel, "Jason A. Donenfeld", Linus Torvalds
Subject: [PATCH 11/13] crypto: x86/sha256 - implement library instead of shash
Date: Fri, 25 Apr 2025 23:50:37 -0700
Message-ID: <20250426065041.1551914-12-ebiggers@kernel.org>
In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org>
References: <20250426065041.1551914-1-ebiggers@kernel.org>

From: Eric Biggers

Instead of providing crypto_shash algorithms for the arch-optimized
SHA-256 code, implement the SHA-256 library.  This is much simpler, it
makes the SHA-256 library functions arch-optimized, and it fixes the
longstanding issue where the arch-optimized SHA-256 was disabled by
default.  SHA-256 still remains available through crypto_shash, but
individual architectures no longer need to handle it.

To match sha256_blocks_arch(), change the type of the nblocks parameter
of the assembly functions from int to size_t.  The assembly functions
actually already treated it as size_t.
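The int-to-size_t prototype change above guards against narrowing: on LP64 targets, a caller's 64-bit `size_t` block count would be truncated (and sign-extended back) if it passed through an `int` parameter. The sketch below is a hypothetical userspace model of that round trip, not the kernel code; `through_int_prototype` exists only for this demonstration, and the out-of-range conversion it performs is implementation-defined (modulo wrap on common compilers).

```c
#include <limits.h>
#include <stddef.h>

/* Model what an asm routine reading a full 64-bit register would see if
 * the C prototype declared the count as "int": the caller's size_t is
 * narrowed at the call boundary and sign-extended back in the callee. */
size_t through_int_prototype(size_t nblocks)
{
	int narrowed = (int)nblocks;	/* implementation-defined if nblocks > INT_MAX */
	return (size_t)narrowed;	/* sign-extends on LP64 */
}
```

Counts that fit in `int` survive the round trip unchanged; a count above `INT_MAX` does not on LP64, which is why the prototypes now match the `size_t` used by `sha256_blocks_arch()`.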
Signed-off-by: Eric Biggers --- arch/x86/crypto/Kconfig | 14 - arch/x86/crypto/Makefile | 3 - arch/x86/crypto/sha256_ssse3_glue.c | 432 ------------------ arch/x86/lib/crypto/Kconfig | 7 + arch/x86/lib/crypto/Makefile | 3 + arch/x86/{ =3D> lib}/crypto/sha256-avx-asm.S | 12 +- arch/x86/{ =3D> lib}/crypto/sha256-avx2-asm.S | 12 +- .../crypto/sha256-ni-asm.S} | 36 +- arch/x86/{ =3D> lib}/crypto/sha256-ssse3-asm.S | 14 +- arch/x86/lib/crypto/sha256.c | 74 +++ 10 files changed, 118 insertions(+), 489 deletions(-) delete mode 100644 arch/x86/crypto/sha256_ssse3_glue.c rename arch/x86/{ =3D> lib}/crypto/sha256-avx-asm.S (98%) rename arch/x86/{ =3D> lib}/crypto/sha256-avx2-asm.S (98%) rename arch/x86/{crypto/sha256_ni_asm.S =3D> lib/crypto/sha256-ni-asm.S} (= 85%) rename arch/x86/{ =3D> lib}/crypto/sha256-ssse3-asm.S (98%) create mode 100644 arch/x86/lib/crypto/sha256.c diff --git a/arch/x86/crypto/Kconfig b/arch/x86/crypto/Kconfig index 9e941362e4cd5..56cfdc79e2c66 100644 --- a/arch/x86/crypto/Kconfig +++ b/arch/x86/crypto/Kconfig @@ -388,24 +388,10 @@ config CRYPTO_SHA1_SSSE3 - SSSE3 (Supplemental SSE3) - AVX (Advanced Vector Extensions) - AVX2 (Advanced Vector Extensions 2) - SHA-NI (SHA Extensions New Instructions) =20 -config CRYPTO_SHA256_SSSE3 - tristate "Hash functions: SHA-224 and SHA-256 (SSSE3/AVX/AVX2/SHA-NI)" - depends on 64BIT - select CRYPTO_SHA256 - select CRYPTO_HASH - help - SHA-224 and SHA-256 secure hash algorithms (FIPS 180) - - Architecture: x86_64 using: - - SSSE3 (Supplemental SSE3) - - AVX (Advanced Vector Extensions) - - AVX2 (Advanced Vector Extensions 2) - - SHA-NI (SHA Extensions New Instructions) - config CRYPTO_SHA512_SSSE3 tristate "Hash functions: SHA-384 and SHA-512 (SSSE3/AVX/AVX2)" depends on 64BIT select CRYPTO_SHA512 select CRYPTO_HASH diff --git a/arch/x86/crypto/Makefile b/arch/x86/crypto/Makefile index fad59a6c6c26f..aa289a9e0153b 100644 --- a/arch/x86/crypto/Makefile +++ b/arch/x86/crypto/Makefile @@ -52,13 +52,10 @@ 
aesni-intel-$(CONFIG_64BIT) +=3D aes-gcm-avx10-x86_64.o endif =20 obj-$(CONFIG_CRYPTO_SHA1_SSSE3) +=3D sha1-ssse3.o sha1-ssse3-y :=3D sha1_avx2_x86_64_asm.o sha1_ssse3_asm.o sha1_ni_asm.o sh= a1_ssse3_glue.o =20 -obj-$(CONFIG_CRYPTO_SHA256_SSSE3) +=3D sha256-ssse3.o -sha256-ssse3-y :=3D sha256-ssse3-asm.o sha256-avx-asm.o sha256-avx2-asm.o = sha256_ni_asm.o sha256_ssse3_glue.o - obj-$(CONFIG_CRYPTO_SHA512_SSSE3) +=3D sha512-ssse3.o sha512-ssse3-y :=3D sha512-ssse3-asm.o sha512-avx-asm.o sha512-avx2-asm.o = sha512_ssse3_glue.o =20 obj-$(CONFIG_CRYPTO_GHASH_CLMUL_NI_INTEL) +=3D ghash-clmulni-intel.o ghash-clmulni-intel-y :=3D ghash-clmulni-intel_asm.o ghash-clmulni-intel_g= lue.o diff --git a/arch/x86/crypto/sha256_ssse3_glue.c b/arch/x86/crypto/sha256_s= sse3_glue.c deleted file mode 100644 index a5d3be00550b8..0000000000000 --- a/arch/x86/crypto/sha256_ssse3_glue.c +++ /dev/null @@ -1,432 +0,0 @@ -/* - * Cryptographic API. - * - * Glue code for the SHA256 Secure Hash Algorithm assembler implementations - * using SSSE3, AVX, AVX2, and SHA-NI instructions. - * - * This file is based on sha256_generic.c - * - * Copyright (C) 2013 Intel Corporation. - * - * Author: - * Tim Chen - * - * This program is free software; you can redistribute it and/or modify it - * under the terms of the GNU General Public License as published by the F= ree - * Software Foundation; either version 2 of the License, or (at your optio= n) - * any later version. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND - * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS - * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN - * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN - * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE - * SOFTWARE. 
- */ - - -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - -#include -#include -#include -#include -#include -#include -#include - -asmlinkage void sha256_transform_ssse3(struct crypto_sha256_state *state, - const u8 *data, int blocks); - -static const struct x86_cpu_id module_cpu_ids[] =3D { - X86_MATCH_FEATURE(X86_FEATURE_SHA_NI, NULL), - X86_MATCH_FEATURE(X86_FEATURE_AVX2, NULL), - X86_MATCH_FEATURE(X86_FEATURE_AVX, NULL), - X86_MATCH_FEATURE(X86_FEATURE_SSSE3, NULL), - {} -}; -MODULE_DEVICE_TABLE(x86cpu, module_cpu_ids); - -static int _sha256_update(struct shash_desc *desc, const u8 *data, - unsigned int len, - sha256_block_fn *sha256_xform) -{ - int remain; - - /* - * Make sure struct crypto_sha256_state begins directly with the SHA256 - * 256-bit internal state, as this is what the asm functions expect. - */ - BUILD_BUG_ON(offsetof(struct crypto_sha256_state, state) !=3D 0); - - kernel_fpu_begin(); - remain =3D sha256_base_do_update_blocks(desc, data, len, sha256_xform); - kernel_fpu_end(); - - return remain; -} - -static int sha256_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out, sha256_block_fn *sha256_xform) -{ - kernel_fpu_begin(); - sha256_base_do_finup(desc, data, len, sha256_xform); - kernel_fpu_end(); - - return sha256_base_finish(desc, out); -} - -static int sha256_ssse3_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return _sha256_update(desc, data, len, sha256_transform_ssse3); -} - -static int sha256_ssse3_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha256_finup(desc, data, len, out, sha256_transform_ssse3); -} - -static int sha256_ssse3_digest(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha256_base_init(desc) ?: - sha256_ssse3_finup(desc, data, len, out); -} - -static struct shash_alg sha256_ssse3_algs[] =3D { { - .digestsize =3D SHA256_DIGEST_SIZE, - .init =3D sha256_base_init, - .update =3D sha256_ssse3_update, - 
.finup =3D sha256_ssse3_finup, - .digest =3D sha256_ssse3_digest, - .descsize =3D sizeof(struct crypto_sha256_state), - .base =3D { - .cra_name =3D "sha256", - .cra_driver_name =3D "sha256-ssse3", - .cra_priority =3D 150, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA256_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -}, { - .digestsize =3D SHA224_DIGEST_SIZE, - .init =3D sha224_base_init, - .update =3D sha256_ssse3_update, - .finup =3D sha256_ssse3_finup, - .descsize =3D sizeof(struct crypto_sha256_state), - .base =3D { - .cra_name =3D "sha224", - .cra_driver_name =3D "sha224-ssse3", - .cra_priority =3D 150, - .cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize =3D SHA224_BLOCK_SIZE, - .cra_module =3D THIS_MODULE, - } -} }; - -static int register_sha256_ssse3(void) -{ - if (boot_cpu_has(X86_FEATURE_SSSE3)) - return crypto_register_shashes(sha256_ssse3_algs, - ARRAY_SIZE(sha256_ssse3_algs)); - return 0; -} - -static void unregister_sha256_ssse3(void) -{ - if (boot_cpu_has(X86_FEATURE_SSSE3)) - crypto_unregister_shashes(sha256_ssse3_algs, - ARRAY_SIZE(sha256_ssse3_algs)); -} - -asmlinkage void sha256_transform_avx(struct crypto_sha256_state *state, - const u8 *data, int blocks); - -static int sha256_avx_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return _sha256_update(desc, data, len, sha256_transform_avx); -} - -static int sha256_avx_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha256_finup(desc, data, len, out, sha256_transform_avx); -} - -static int sha256_avx_digest(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha256_base_init(desc) ?: - sha256_avx_finup(desc, data, len, out); -} - -static struct shash_alg sha256_avx_algs[] =3D { { - .digestsize =3D SHA256_DIGEST_SIZE, - .init =3D sha256_base_init, - .update =3D sha256_avx_update, - .finup =3D sha256_avx_finup, 
- .digest = sha256_avx_digest, - .descsize = sizeof(struct crypto_sha256_state), - .base = { - .cra_name = "sha256", - .cra_driver_name = "sha256-avx", - .cra_priority = 160, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA256_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}, { - .digestsize = SHA224_DIGEST_SIZE, - .init = sha224_base_init, - .update = sha256_avx_update, - .finup = sha256_avx_finup, - .descsize = sizeof(struct crypto_sha256_state), - .base = { - .cra_name = "sha224", - .cra_driver_name = "sha224-avx", - .cra_priority = 160, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA224_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -} }; - -static bool avx_usable(void) -{ - if (!cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL)) { - if (boot_cpu_has(X86_FEATURE_AVX)) - pr_info("AVX detected but unusable.\n"); - return false; - } - - return true; -} - -static int register_sha256_avx(void) -{ - if (avx_usable()) - return crypto_register_shashes(sha256_avx_algs, - ARRAY_SIZE(sha256_avx_algs)); - return 0; -} - -static void unregister_sha256_avx(void) -{ - if (avx_usable()) - crypto_unregister_shashes(sha256_avx_algs, - ARRAY_SIZE(sha256_avx_algs)); -} - -asmlinkage void sha256_transform_rorx(struct crypto_sha256_state *state, - const u8 *data, int blocks); - -static int sha256_avx2_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return _sha256_update(desc, data, len, sha256_transform_rorx); -} - -static int sha256_avx2_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha256_finup(desc, data, len, out, sha256_transform_rorx); -} - -static int sha256_avx2_digest(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha256_base_init(desc) ?: - sha256_avx2_finup(desc, data, len, out); -} - -static struct shash_alg
sha256_avx2_algs[] = { { - .digestsize = SHA256_DIGEST_SIZE, - .init = sha256_base_init, - .update = sha256_avx2_update, - .finup = sha256_avx2_finup, - .digest = sha256_avx2_digest, - .descsize = sizeof(struct crypto_sha256_state), - .base = { - .cra_name = "sha256", - .cra_driver_name = "sha256-avx2", - .cra_priority = 170, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA256_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}, { - .digestsize = SHA224_DIGEST_SIZE, - .init = sha224_base_init, - .update = sha256_avx2_update, - .finup = sha256_avx2_finup, - .descsize = sizeof(struct crypto_sha256_state), - .base = { - .cra_name = "sha224", - .cra_driver_name = "sha224-avx2", - .cra_priority = 170, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA224_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -} }; - -static bool avx2_usable(void) -{ - if (avx_usable() && boot_cpu_has(X86_FEATURE_AVX2) && - boot_cpu_has(X86_FEATURE_BMI2)) - return true; - - return false; -} - -static int register_sha256_avx2(void) -{ - if (avx2_usable()) - return crypto_register_shashes(sha256_avx2_algs, - ARRAY_SIZE(sha256_avx2_algs)); - return 0; -} - -static void unregister_sha256_avx2(void) -{ - if (avx2_usable()) - crypto_unregister_shashes(sha256_avx2_algs, - ARRAY_SIZE(sha256_avx2_algs)); -} - -asmlinkage void sha256_ni_transform(struct crypto_sha256_state *digest, - const u8 *data, int rounds); - -static int sha256_ni_update(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return _sha256_update(desc, data, len, sha256_ni_transform); -} - -static int sha256_ni_finup(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return sha256_finup(desc, data, len, out, sha256_ni_transform); -} - -static int sha256_ni_digest(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) -{ - return
sha256_base_init(desc) ?: - sha256_ni_finup(desc, data, len, out); -} - -static struct shash_alg sha256_ni_algs[] = { { - .digestsize = SHA256_DIGEST_SIZE, - .init = sha256_base_init, - .update = sha256_ni_update, - .finup = sha256_ni_finup, - .digest = sha256_ni_digest, - .descsize = sizeof(struct crypto_sha256_state), - .base = { - .cra_name = "sha256", - .cra_driver_name = "sha256-ni", - .cra_priority = 250, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA256_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -}, { - .digestsize = SHA224_DIGEST_SIZE, - .init = sha224_base_init, - .update = sha256_ni_update, - .finup = sha256_ni_finup, - .descsize = sizeof(struct crypto_sha256_state), - .base = { - .cra_name = "sha224", - .cra_driver_name = "sha224-ni", - .cra_priority = 250, - .cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .cra_blocksize = SHA224_BLOCK_SIZE, - .cra_module = THIS_MODULE, - } -} }; - -static int register_sha256_ni(void) -{ - if (boot_cpu_has(X86_FEATURE_SHA_NI)) - return crypto_register_shashes(sha256_ni_algs, - ARRAY_SIZE(sha256_ni_algs)); - return 0; -} - -static void unregister_sha256_ni(void) -{ - if (boot_cpu_has(X86_FEATURE_SHA_NI)) - crypto_unregister_shashes(sha256_ni_algs, - ARRAY_SIZE(sha256_ni_algs)); -} - -static int __init sha256_ssse3_mod_init(void) -{ - if (!x86_match_cpu(module_cpu_ids)) - return -ENODEV; - - if (register_sha256_ssse3()) - goto fail; - - if (register_sha256_avx()) { - unregister_sha256_ssse3(); - goto fail; - } - - if (register_sha256_avx2()) { - unregister_sha256_avx(); - unregister_sha256_ssse3(); - goto fail; - } - - if (register_sha256_ni()) { - unregister_sha256_avx2(); - unregister_sha256_avx(); - unregister_sha256_ssse3(); - goto fail; - } - - return 0; -fail: - return -ENODEV; -} - -static void __exit sha256_ssse3_mod_fini(void) -{ - unregister_sha256_ni(); - unregister_sha256_avx2(); -
unregister_sha256_avx(); - unregister_sha256_ssse3(); -} - -module_init(sha256_ssse3_mod_init); -module_exit(sha256_ssse3_mod_fini); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA256 Secure Hash Algorithm, Supplemental SSE3 accelerated"); - -MODULE_ALIAS_CRYPTO("sha256"); -MODULE_ALIAS_CRYPTO("sha256-ssse3"); -MODULE_ALIAS_CRYPTO("sha256-avx"); -MODULE_ALIAS_CRYPTO("sha256-avx2"); -MODULE_ALIAS_CRYPTO("sha224"); -MODULE_ALIAS_CRYPTO("sha224-ssse3"); -MODULE_ALIAS_CRYPTO("sha224-avx"); -MODULE_ALIAS_CRYPTO("sha224-avx2"); -MODULE_ALIAS_CRYPTO("sha256-ni"); -MODULE_ALIAS_CRYPTO("sha224-ni"); diff --git a/arch/x86/lib/crypto/Kconfig b/arch/x86/lib/crypto/Kconfig index 546fe2afe0b51..e344579db3d85 100644 --- a/arch/x86/lib/crypto/Kconfig +++ b/arch/x86/lib/crypto/Kconfig @@ -22,5 +22,12 @@ config CRYPTO_CHACHA20_X86_64 config CRYPTO_POLY1305_X86_64 tristate depends on 64BIT default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 + +config CRYPTO_SHA256_X86_64 + tristate + depends on 64BIT + default CRYPTO_LIB_SHA256 + select CRYPTO_ARCH_HAVE_LIB_SHA256 + select CRYPTO_LIB_SHA256_GENERIC diff --git a/arch/x86/lib/crypto/Makefile b/arch/x86/lib/crypto/Makefile index c2ff8c5f1046e..abceca3d31c01 100644 --- a/arch/x86/lib/crypto/Makefile +++ b/arch/x86/lib/crypto/Makefile @@ -8,10 +8,13 @@ chacha-x86_64-y := chacha-avx2-x86_64.o chacha-ssse3-x86_64.o chacha-avx512vl-x8 obj-$(CONFIG_CRYPTO_POLY1305_X86_64) += poly1305-x86_64.o poly1305-x86_64-y := poly1305-x86_64-cryptogams.o poly1305_glue.o targets += poly1305-x86_64-cryptogams.S +obj-$(CONFIG_CRYPTO_SHA256_X86_64) += sha256-x86_64.o +sha256-x86_64-y := sha256.o sha256-ssse3-asm.o sha256-avx-asm.o sha256-avx2-asm.o sha256-ni-asm.o + quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $< > $@ $(obj)/%.S: $(src)/%.pl FORCE $(call if_changed,perlasm) diff --git a/arch/x86/crypto/sha256-avx-asm.S b/arch/x86/lib/crypto/sha256-avx-asm.S similarity index 98% rename from
arch/x86/crypto/sha256-avx-asm.S rename to arch/x86/lib/crypto/sha256-avx-asm.S index 53de72bdd851e..0d7b2c3e45d9a 100644 --- a/arch/x86/crypto/sha256-avx-asm.S +++ b/arch/x86/lib/crypto/sha256-avx-asm.S @@ -46,11 +46,11 @@ ######################################################################## # This code schedules 1 block at a time, with 4 lanes per block ######################################################################## #include -#include +#include ## assume buffers not aligned #define VMOVDQ vmovdqu ################################ Define Macros @@ -339,17 +339,17 @@ a = TMP_ add y0, h # h = h + S1 + CH + k + w + S0 + MAJ ROTATE_ARGS .endm ######################################################################## -## void sha256_transform_avx(struct sha256_state *state, const u8 *data, int blocks) -## arg 1 : pointer to state -## arg 2 : pointer to input data -## arg 3 : Num blocks +## void sha256_transform_avx(u32 state[SHA256_STATE_WORDS], +## const u8 *data, size_t nblocks); ######################################################################## .text -SYM_TYPED_FUNC_START(sha256_transform_avx) +SYM_FUNC_START(sha256_transform_avx) + ANNOTATE_NOENDBR # since this is called only via static_call + pushq %rbx pushq %r12 pushq %r13 pushq %r14 pushq %r15 diff --git a/arch/x86/crypto/sha256-avx2-asm.S b/arch/x86/lib/crypto/sha256-avx2-asm.S similarity index 98% rename from arch/x86/crypto/sha256-avx2-asm.S rename to arch/x86/lib/crypto/sha256-avx2-asm.S index 0bbec1c75cd0b..25d3380321ec3 100644 --- a/arch/x86/crypto/sha256-avx2-asm.S +++ b/arch/x86/lib/crypto/sha256-avx2-asm.S @@ -47,11 +47,11 @@ ######################################################################## # This code schedules 2 blocks at a time, with 4 lanes per block ######################################################################## #include -#include +#include ## assume buffers not aligned #define VMOVDQ vmovdqu ################################ Define
Macros @@ -516,17 +516,17 @@ STACK_SIZE = _CTX + _CTX_SIZE ROTATE_ARGS .endm ######################################################################## -## void sha256_transform_rorx(struct sha256_state *state, const u8 *data, int blocks) -## arg 1 : pointer to state -## arg 2 : pointer to input data -## arg 3 : Num blocks +## void sha256_transform_rorx(u32 state[SHA256_STATE_WORDS], +## const u8 *data, size_t nblocks); ######################################################################## .text -SYM_TYPED_FUNC_START(sha256_transform_rorx) +SYM_FUNC_START(sha256_transform_rorx) + ANNOTATE_NOENDBR # since this is called only via static_call + pushq %rbx pushq %r12 pushq %r13 pushq %r14 pushq %r15 diff --git a/arch/x86/crypto/sha256_ni_asm.S b/arch/x86/lib/crypto/sha256-ni-asm.S similarity index 85% rename from arch/x86/crypto/sha256_ni_asm.S rename to arch/x86/lib/crypto/sha256-ni-asm.S index d515a55a3bc1d..d3548206cf3d4 100644 --- a/arch/x86/crypto/sha256_ni_asm.S +++ b/arch/x86/lib/crypto/sha256-ni-asm.S @@ -52,13 +52,13 @@ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. * */ #include -#include +#include -#define DIGEST_PTR %rdi /* 1st arg */ +#define STATE_PTR %rdi /* 1st arg */ #define DATA_PTR %rsi /* 2nd arg */ #define NUM_BLKS %rdx /* 3rd arg */ #define SHA256CONSTANTS %rax @@ -96,40 +96,36 @@ sha256msg1 \m0, \m3 .endif .endm /* - * Intel SHA Extensions optimized implementation of a SHA-256 update function + * Intel SHA Extensions optimized implementation of a SHA-256 block function * - * The function takes a pointer to the current hash values, a pointer to the - * input data, and a number of 64 byte blocks to process. Once all blocks have - * been processed, the digest pointer is updated with the resulting hash value. - * The function only processes complete blocks, there is no functionality to - * store partial blocks.
All message padding and hash value initialization must - be done outside the update function. + * This function takes a pointer to the current SHA-256 state, a pointer to the + * input data, and the number of 64-byte blocks to process. Once all blocks + * have been processed, the state is updated with the new state. This function + * only processes complete blocks. State initialization, buffering of partial + * blocks, and digest finalization is expected to be handled elsewhere. * - * void sha256_ni_transform(uint32_t *digest, const void *data, - uint32_t numBlocks); - * digest : pointer to digest - * data: pointer to input data - * numBlocks: Number of blocks to process + * void sha256_ni_transform(u32 state[SHA256_STATE_WORDS], + * const u8 *data, size_t nblocks); */ - .text -SYM_TYPED_FUNC_START(sha256_ni_transform) +SYM_FUNC_START(sha256_ni_transform) + ANNOTATE_NOENDBR # since this is called only via static_call shl $6, NUM_BLKS /* convert to bytes */ jz .Ldone_hash add DATA_PTR, NUM_BLKS /* pointer to end of data */ /* * load initial hash values * Need to reorder these appropriately * DCBA, HGFE -> ABEF, CDGH */ - movdqu 0*16(DIGEST_PTR), STATE0 /* DCBA */ - movdqu 1*16(DIGEST_PTR), STATE1 /* HGFE */ + movdqu 0*16(STATE_PTR), STATE0 /* DCBA */ + movdqu 1*16(STATE_PTR), STATE1 /* HGFE */ movdqa STATE0, TMP punpcklqdq STATE1, STATE0 /* FEBA */ punpckhqdq TMP, STATE1 /* DCHG */ pshufd $0x1B, STATE0, STATE0 /* ABEF */ @@ -164,12 +160,12 @@ SYM_TYPED_FUNC_START(sha256_ni_transform) punpcklqdq STATE1, STATE0 /* GHEF */ punpckhqdq TMP, STATE1 /* ABCD */ pshufd $0xB1, STATE0, STATE0 /* HGFE */ pshufd $0x1B, STATE1, STATE1 /* DCBA */ - movdqu STATE1, 0*16(DIGEST_PTR) - movdqu STATE0, 1*16(DIGEST_PTR) + movdqu STATE1, 0*16(STATE_PTR) + movdqu STATE0, 1*16(STATE_PTR) .Ldone_hash: RET SYM_FUNC_END(sha256_ni_transform) diff --git a/arch/x86/crypto/sha256-ssse3-asm.S b/arch/x86/lib/crypto/sha256-ssse3-asm.S similarity index 98% rename
from arch/x86/crypto/sha256-ssse3-asm.S rename to arch/x86/lib/crypto/sha256-ssse3-asm.S index 93264ee445432..7f24a4cdcb257 100644 --- a/arch/x86/crypto/sha256-ssse3-asm.S +++ b/arch/x86/lib/crypto/sha256-ssse3-asm.S @@ -45,11 +45,11 @@ # and search for that title. # ######################################################################## #include -#include +#include ## assume buffers not aligned #define MOVDQ movdqu ################################ Define Macros @@ -346,19 +346,17 @@ a = TMP_ add y0, h # h = h + S1 + CH + k + w + S0 + MAJ ROTATE_ARGS .endm ######################################################################## -## void sha256_transform_ssse3(struct sha256_state *state, const u8 *data, -## int blocks); -## arg 1 : pointer to state -## (struct sha256_state is assumed to begin with u32 state[8]) -## arg 2 : pointer to input data -## arg 3 : Num blocks +## void sha256_transform_ssse3(u32 state[SHA256_STATE_WORDS], +## const u8 *data, size_t nblocks); ######################################################################## .text -SYM_TYPED_FUNC_START(sha256_transform_ssse3) +SYM_FUNC_START(sha256_transform_ssse3) + ANNOTATE_NOENDBR # since this is called only via static_call + pushq %rbx pushq %r12 pushq %r13 pushq %r14 pushq %r15 diff --git a/arch/x86/lib/crypto/sha256.c b/arch/x86/lib/crypto/sha256.c new file mode 100644 index 0000000000000..47865b5cd94be --- /dev/null +++ b/arch/x86/lib/crypto/sha256.c @@ -0,0 +1,74 @@ +// SPDX-License-Identifier: GPL-2.0-or-later +/* + * SHA-256 optimized for x86_64 + * + * Copyright 2025 Google LLC + */ +#include +#include +#include +#include +#include +#include + +asmlinkage void sha256_transform_ssse3(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks); +asmlinkage void sha256_transform_avx(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks); +asmlinkage void sha256_transform_rorx(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks); +asmlinkage void
sha256_ni_transform(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks); + +static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_x86); + +DEFINE_STATIC_CALL(sha256_blocks_x86, sha256_transform_ssse3); + +void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], + const u8 *data, size_t nblocks) +{ + if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable()) { + kernel_fpu_begin(); + static_call(sha256_blocks_x86)(state, data, nblocks); + kernel_fpu_end(); + } else { + sha256_blocks_generic(state, data, nblocks); + } +} +EXPORT_SYMBOL(sha256_blocks_arch); + +bool sha256_is_arch_optimized(void) +{ + return static_key_enabled(&have_sha256_x86); +} +EXPORT_SYMBOL(sha256_is_arch_optimized); + +static int __init sha256_x86_mod_init(void) +{ + if (boot_cpu_has(X86_FEATURE_SHA_NI)) { + static_call_update(sha256_blocks_x86, sha256_ni_transform); + } else if (cpu_has_xfeatures(XFEATURE_MASK_SSE | + XFEATURE_MASK_YMM, NULL) && + boot_cpu_has(X86_FEATURE_AVX)) { + if (boot_cpu_has(X86_FEATURE_AVX2) && + boot_cpu_has(X86_FEATURE_BMI2)) + static_call_update(sha256_blocks_x86, + sha256_transform_rorx); + else + static_call_update(sha256_blocks_x86, + sha256_transform_avx); + } else if (!boot_cpu_has(X86_FEATURE_SSSE3)) { + return 0; + } + static_branch_enable(&have_sha256_x86); + return 0; +} +arch_initcall(sha256_x86_mod_init); + +static void __exit sha256_x86_mod_exit(void) +{ +} +module_exit(sha256_x86_mod_exit); + +MODULE_LICENSE("GPL"); +MODULE_DESCRIPTION("SHA-256 optimized for x86_64"); -- 2.49.0 From nobody Sat Feb 7 05:57:37 2026 From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, sparclinux@vger.kernel.org, linux-s390@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , "Jason A .
Donenfeld " , Linus Torvalds Subject: [PATCH 12/13] crypto: sha256 - remove sha256_base.h Date: Fri, 25 Apr 2025 23:50:38 -0700 Message-ID: <20250426065041.1551914-13-ebiggers@kernel.org> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org> References: <20250426065041.1551914-1-ebiggers@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Eric Biggers sha256_base.h is no longer used, so remove it. Signed-off-by: Eric Biggers --- include/crypto/sha256_base.h | 183 ----------------------------------- 1 file changed, 183 deletions(-) delete mode 100644 include/crypto/sha256_base.h diff --git a/include/crypto/sha256_base.h b/include/crypto/sha256_base.h deleted file mode 100644 index 6878fb9c26c04..0000000000000 --- a/include/crypto/sha256_base.h +++ /dev/null @@ -1,183 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ -/* - * sha256_base.h - core logic for SHA-256 implementations - * - * Copyright (C) 2015 Linaro Ltd - */ - -#ifndef _CRYPTO_SHA256_BASE_H -#define _CRYPTO_SHA256_BASE_H - -#include -#include -#include -#include -#include -#include - -typedef void (sha256_block_fn)(struct crypto_sha256_state *sst, u8 const *src, - int blocks); - -static inline int sha224_base_init(struct shash_desc *desc) -{ - struct sha256_state *sctx = shash_desc_ctx(desc); - - sha224_init(sctx); - return 0; -} - -static inline int sha256_base_init(struct shash_desc *desc) -{ - struct sha256_state *sctx = shash_desc_ctx(desc); - - sha256_init(sctx); - return 0; -} - -static inline int lib_sha256_base_do_update(struct sha256_state *sctx, - const u8 *data, - unsigned int len, - sha256_block_fn *block_fn) -{ - unsigned int partial = sctx->count % SHA256_BLOCK_SIZE; - struct crypto_sha256_state *state = (void *)sctx; - - sctx->count += len; - - if (unlikely((partial
+ len) >= SHA256_BLOCK_SIZE)) { - int blocks; - - if (partial) { - int p = SHA256_BLOCK_SIZE - partial; - - memcpy(sctx->buf + partial, data, p); - data += p; - len -= p; - - block_fn(state, sctx->buf, 1); - } - - blocks = len / SHA256_BLOCK_SIZE; - len %= SHA256_BLOCK_SIZE; - - if (blocks) { - block_fn(state, data, blocks); - data += blocks * SHA256_BLOCK_SIZE; - } - partial = 0; - } - if (len) - memcpy(sctx->buf + partial, data, len); - - return 0; -} - -static inline int lib_sha256_base_do_update_blocks( - struct crypto_sha256_state *sctx, const u8 *data, unsigned int len, - sha256_block_fn *block_fn) -{ - unsigned int remain = len - round_down(len, SHA256_BLOCK_SIZE); - - sctx->count += len - remain; - block_fn(sctx, data, len / SHA256_BLOCK_SIZE); - return remain; -} - -static inline int sha256_base_do_update_blocks( - struct shash_desc *desc, const u8 *data, unsigned int len, - sha256_block_fn *block_fn) -{ - return lib_sha256_base_do_update_blocks(shash_desc_ctx(desc), data, - len, block_fn); -} - -static inline int lib_sha256_base_do_finup(struct crypto_sha256_state *sctx, - const u8 *src, unsigned int len, - sha256_block_fn *block_fn) -{ - unsigned int bit_offset = SHA256_BLOCK_SIZE / 8 - 1; - union { - __be64 b64[SHA256_BLOCK_SIZE / 4]; - u8 u8[SHA256_BLOCK_SIZE * 2]; - } block = {}; - - if (len >= bit_offset * 8) - bit_offset += SHA256_BLOCK_SIZE / 8; - memcpy(&block, src, len); - block.u8[len] = 0x80; - sctx->count += len; - block.b64[bit_offset] = cpu_to_be64(sctx->count << 3); - block_fn(sctx, block.u8, (bit_offset + 1) * 8 / SHA256_BLOCK_SIZE); - memzero_explicit(&block, sizeof(block)); - - return 0; -} - -static inline int sha256_base_do_finup(struct shash_desc *desc, - const u8 *src, unsigned int len, - sha256_block_fn *block_fn) -{ - struct crypto_sha256_state *sctx = shash_desc_ctx(desc); - - if (len >= SHA256_BLOCK_SIZE) { - int remain; - - remain = lib_sha256_base_do_update_blocks(sctx, src, len, -
block_fn); - src += len - remain; - len = remain; - } - return lib_sha256_base_do_finup(sctx, src, len, block_fn); -} - -static inline int lib_sha256_base_do_finalize(struct sha256_state *sctx, - sha256_block_fn *block_fn) -{ - unsigned int partial = sctx->count % SHA256_BLOCK_SIZE; - struct crypto_sha256_state *state = (void *)sctx; - - sctx->count -= partial; - return lib_sha256_base_do_finup(state, sctx->buf, partial, block_fn); -} - -static inline int sha256_base_do_finalize(struct shash_desc *desc, - sha256_block_fn *block_fn) -{ - struct sha256_state *sctx = shash_desc_ctx(desc); - - return lib_sha256_base_do_finalize(sctx, block_fn); -} - -static inline int __sha256_base_finish(u32 state[SHA256_DIGEST_SIZE / 4], - u8 *out, unsigned int digest_size) -{ - __be32 *digest = (__be32 *)out; - int i; - - for (i = 0; digest_size > 0; i++, digest_size -= sizeof(__be32)) - put_unaligned_be32(state[i], digest++); - return 0; -} - -static inline void lib_sha256_base_finish(struct sha256_state *sctx, u8 *out, - unsigned int digest_size) -{ - __sha256_base_finish(sctx->state, out, digest_size); - memzero_explicit(sctx, sizeof(*sctx)); -} - -static inline int sha256_base_finish(struct shash_desc *desc, u8 *out) -{ - unsigned int digest_size = crypto_shash_digestsize(desc->tfm); - struct crypto_sha256_state *sctx = shash_desc_ctx(desc); - - return __sha256_base_finish(sctx->state, out, digest_size); -} - -static inline void sha256_transform_blocks(struct crypto_sha256_state *sst, - const u8 *input, int blocks) -{ - sha256_blocks_generic(sst->state, input, blocks); -} - -#endif /* _CRYPTO_SHA256_BASE_H */ -- 2.49.0 From nobody Sat Feb 7 05:57:37 2026 From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, linux-arch@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
sparclinux@vger.kernel.org, linux-s390@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , "Jason A . Donenfeld " , Linus Torvalds Subject: [PATCH 13/13] crypto: lib/sha256 - improve function prototypes Date: Fri, 25 Apr 2025 23:50:39 -0700 Message-ID: <20250426065041.1551914-14-ebiggers@kernel.org> X-Mailer: git-send-email 2.49.0 In-Reply-To: <20250426065041.1551914-1-ebiggers@kernel.org> References: <20250426065041.1551914-1-ebiggers@kernel.org> MIME-Version: 1.0 Content-Type: text/plain; charset="utf-8" From: Eric Biggers Follow best practices by changing the length parameters to size_t and explicitly specifying the length of the output digest arrays. Signed-off-by: Eric Biggers --- include/crypto/sha2.h | 8 ++++---- lib/crypto/sha256.c | 8 ++++---- 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h index 444484d1b1cfa..7dfc560daa2c7 100644 --- a/include/crypto/sha2.h +++ b/include/crypto/sha2.h @@ -93,13 +93,13 @@ static inline void sha256_init(struct sha256_state *sctx) sctx->state[5] = SHA256_H5; sctx->state[6] = SHA256_H6; sctx->state[7] = SHA256_H7; sctx->count = 0; } -void sha256_update(struct sha256_state *sctx, const u8 *data, unsigned int len); -void sha256_final(struct sha256_state *sctx, u8 *out); -void sha256(const u8 *data, unsigned int len, u8 *out); +void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len); +void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE]); +void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]); static inline void sha224_init(struct sha256_state *sctx) { sctx->state[0] = SHA224_H0; sctx->state[1] = SHA224_H1; @@ -110,8 +110,8 @@ static inline void sha224_init(struct sha256_state *sctx) sctx->state[6] = SHA224_H6; sctx->state[7] = SHA224_H7; sctx->count
= 0; } /* Simply use sha256_update as it is equivalent to sha224_update. */ -void sha224_final(struct sha256_state *sctx, u8 *out); +void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]); #endif /* _CRYPTO_SHA2_H */ diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c index 182d1088d8893..50b7eeac2d89e 100644 --- a/lib/crypto/sha256.c +++ b/lib/crypto/sha256.c @@ -182,11 +182,11 @@ static inline void __sha256_update(struct sha256_state *sctx, const u8 *data, } if (len) memcpy(&sctx->buf[partial], data, len); } -void sha256_update(struct sha256_state *sctx, const u8 *data, unsigned int len) +void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len) { __sha256_update(sctx, data, len, false); } EXPORT_SYMBOL(sha256_update); @@ -213,23 +213,23 @@ static inline void __sha256_final(struct sha256_state *sctx, u8 *out, put_unaligned_be32(sctx->state[i / 4], out + i); memzero_explicit(sctx, sizeof(*sctx)); } -void sha256_final(struct sha256_state *sctx, u8 *out) +void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE]) { __sha256_final(sctx, out, SHA256_DIGEST_SIZE, false); } EXPORT_SYMBOL(sha256_final); -void sha224_final(struct sha256_state *sctx, u8 *out) +void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]) { __sha256_final(sctx, out, SHA224_DIGEST_SIZE, false); } EXPORT_SYMBOL(sha224_final); -void sha256(const u8 *data, unsigned int len, u8 *out) +void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]) { struct sha256_state sctx; sha256_init(&sctx); sha256_update(&sctx, data, len); -- 2.49.0
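[Editor's note, not part of the series] The finalization helper removed in patch 12, lib_sha256_base_do_finup(), decides whether the 0x80 terminator plus the 8-byte bit count fit in the current block or spill into a second padding block. A standalone userspace restatement of just that bookkeeping is sketched below; sha256_tail_blocks() is a hypothetical name used for illustration, not a kernel function:

```c
#include <stddef.h>

#define SHA256_BLOCK_SIZE 64

/*
 * Restatement of the bit_offset logic from the removed
 * lib_sha256_base_do_finup(): given the length of the final partial
 * message fragment (0..63 bytes), return how many 64-byte blocks the
 * padded tail occupies (1 or 2).
 */
static size_t sha256_tail_blocks(size_t len)
{
	/* index of the big-endian 64-bit length word, in 8-byte units */
	size_t bit_offset = SHA256_BLOCK_SIZE / 8 - 1;	/* 7 -> byte 56 */

	/*
	 * With len >= 56, the 0x80 terminator at block[len] would
	 * overlap the length field, so padding spills into a second
	 * block and the length word moves to byte 120.
	 */
	if (len >= bit_offset * 8)
		bit_offset += SHA256_BLOCK_SIZE / 8;	/* 15 */
	return (bit_offset + 1) * 8 / SHA256_BLOCK_SIZE;
}
```

So any final fragment of 0..55 bytes is finished in one compression call, and 56..63 bytes require two, matching the `(bit_offset + 1) * 8 / SHA256_BLOCK_SIZE` block count passed to block_fn in the helper.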