From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld",
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	x86@kernel.org, Eric Biggers
Subject: [PATCH 01/18] libceph: Rename hmac_sha256() to ceph_hmac_sha256()
Date: Wed, 25 Jun 2025 00:08:02 -0700
Message-ID: <20250625070819.1496119-2-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Rename hmac_sha256() to ceph_hmac_sha256(), to avoid a naming conflict
with the upcoming hmac_sha256() library function.  This code will be
able to use the HMAC-SHA256 library, but that's left for a later
commit.
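Aside (not part of the patch): the renamed function computes a standard HMAC-SHA256 over the concatenation of the kvec segments. A minimal Python sketch of that property, with a made-up key and made-up segments standing in for the connection's HMAC key and kvec array:

```python
import hmac
import hashlib

key = b"session secret"                       # stand-in for the connection's HMAC key
kvecs = [b"frame ", b"header ", b"payload"]   # stand-in for the kvec array

# HMAC over the concatenation of all segments...
expected = hmac.new(key, b"".join(kvecs), hashlib.sha256).digest()

# ...equals feeding the segments one at a time, which is what the
# kernel code does by looping over the kvecs.
mac = hmac.new(key, digestmod=hashlib.sha256)
for seg in kvecs:
    mac.update(seg)
assert mac.digest() == expected
```

This is why the function can take a scatter/gather list rather than a flat buffer: incremental HMAC updates compose exactly like concatenation.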
Signed-off-by: Eric Biggers
---
 net/ceph/messenger_v2.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/ceph/messenger_v2.c b/net/ceph/messenger_v2.c
index bd608ffa06279..5483b4eed94e1 100644
--- a/net/ceph/messenger_v2.c
+++ b/net/ceph/messenger_v2.c
@@ -791,12 +791,12 @@ static int setup_crypto(struct ceph_connection *con,
 	       con_secret + CEPH_GCM_KEY_LEN + CEPH_GCM_IV_LEN,
 	       CEPH_GCM_IV_LEN);
 	return 0;  /* auth_x, secure mode */
 }
 
-static int hmac_sha256(struct ceph_connection *con, const struct kvec *kvecs,
-		       int kvec_cnt, u8 *hmac)
+static int ceph_hmac_sha256(struct ceph_connection *con,
+			    const struct kvec *kvecs, int kvec_cnt, u8 *hmac)
 {
 	SHASH_DESC_ON_STACK(desc, con->v2.hmac_tfm);  /* tfm arg is ignored */
 	int ret;
 	int i;
 
@@ -1460,12 +1460,12 @@ static int prepare_auth_signature(struct ceph_connection *con)
 	buf = alloc_conn_buf(con, head_onwire_len(SHA256_DIGEST_SIZE,
 						  con_secure(con)));
 	if (!buf)
 		return -ENOMEM;
 
-	ret = hmac_sha256(con, con->v2.in_sign_kvecs, con->v2.in_sign_kvec_cnt,
-			  CTRL_BODY(buf));
+	ret = ceph_hmac_sha256(con, con->v2.in_sign_kvecs,
+			       con->v2.in_sign_kvec_cnt, CTRL_BODY(buf));
 	if (ret)
 		return ret;
 
 	return prepare_control(con, FRAME_TAG_AUTH_SIGNATURE, buf,
 			       SHA256_DIGEST_SIZE);
@@ -2458,12 +2458,12 @@ static int process_auth_signature(struct ceph_connection *con,
 	if (con->state != CEPH_CON_S_V2_AUTH_SIGNATURE) {
 		con->error_msg = "protocol error, unexpected auth_signature";
 		return -EINVAL;
 	}
 
-	ret = hmac_sha256(con, con->v2.out_sign_kvecs,
-			  con->v2.out_sign_kvec_cnt, hmac);
+	ret = ceph_hmac_sha256(con, con->v2.out_sign_kvecs,
+			       con->v2.out_sign_kvec_cnt, hmac);
 	if (ret)
 		return ret;
 
 	ceph_decode_need(&p, end, SHA256_DIGEST_SIZE, bad);
 	if (crypto_memneq(p, hmac, SHA256_DIGEST_SIZE)) {
-- 
2.50.0

From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld",
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	x86@kernel.org, Eric Biggers
Subject: [PATCH 02/18] cxl/test: Simplify fw_buf_checksum_show()
Date: Wed, 25 Jun 2025 00:08:03 -0700
Message-ID: <20250625070819.1496119-3-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

First, just use sha256() instead of a sequence of sha256_init(),
sha256_update(), and sha256_final().  The result is the same.

Second, use %*phN instead of open-coding the conversion of bytes to
hex.
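Aside (not part of the patch): both claims in the commit message can be illustrated outside the kernel. This Python sketch uses hashlib as a stand-in for the kernel's sha256 helpers and a made-up byte string for the firmware buffer; it shows that one-shot hashing matches init/update/final, and that the removed `"%02x"` loop produces the same plain lowercase hex that `%*phN` emits:

```python
import hashlib

fw = b"example firmware blob"  # stand-in for mdata->fw

# Old code path: init / update / final.
sctx = hashlib.sha256()
sctx.update(fw)
step_by_step = sctx.digest()

# New code path: one-shot hash of the whole buffer; identical result.
one_shot = hashlib.sha256(fw).digest()
assert one_shot == step_by_step

# The removed sprintf("%02x") loop and %*phN both yield plain
# lowercase hex with no separators.
assert one_shot.hex() == "".join("%02x" % b for b in one_shot)
```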
Signed-off-by: Eric Biggers
---
 tools/testing/cxl/test/mem.c | 21 ++-------------------
 1 file changed, 2 insertions(+), 19 deletions(-)

diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index 0f1d91f57ba34..d533481672b78 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -1826,31 +1826,14 @@ static DEVICE_ATTR_RW(security_lock);
 static ssize_t fw_buf_checksum_show(struct device *dev,
 				    struct device_attribute *attr, char *buf)
 {
 	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
 	u8 hash[SHA256_DIGEST_SIZE];
-	unsigned char *hstr, *hptr;
-	struct sha256_state sctx;
-	ssize_t written = 0;
-	int i;
-
-	sha256_init(&sctx);
-	sha256_update(&sctx, mdata->fw, mdata->fw_size);
-	sha256_final(&sctx, hash);
-
-	hstr = kzalloc((SHA256_DIGEST_SIZE * 2) + 1, GFP_KERNEL);
-	if (!hstr)
-		return -ENOMEM;
-
-	hptr = hstr;
-	for (i = 0; i < SHA256_DIGEST_SIZE; i++)
-		hptr += sprintf(hptr, "%02x", hash[i]);
 
-	written = sysfs_emit(buf, "%s\n", hstr);
+	sha256(mdata->fw, mdata->fw_size, hash);
 
-	kfree(hstr);
-	return written;
+	return sysfs_emit(buf, "%*phN\n", SHA256_DIGEST_SIZE, hash);
 }
 
 static DEVICE_ATTR_RO(fw_buf_checksum);
 
 static ssize_t sanitize_timeout_show(struct device *dev,
-- 
2.50.0

From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld",
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	x86@kernel.org, Eric Biggers
Subject: [PATCH 03/18] crypto: sha512 - Use the correct legacy export format
Date: Wed, 25 Jun 2025 00:08:04 -0700
Message-ID: <20250625070819.1496119-4-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

It appears the legacy export format is intended to have the value of
the bytecount field be block-aligned, so update
__crypto_sha512_export() and __crypto_sha512_import() to match.

Fixes: e62c2fe56418 ("crypto: sha512 - Use same state format as legacy drivers")
Signed-off-by: Eric Biggers
---
 crypto/sha512.c | 22 +++++++++++++++++-----
 1 file changed, 17 insertions(+), 5 deletions(-)

diff --git a/crypto/sha512.c b/crypto/sha512.c
index 0eed801346018..5bda259dd22fe 100644
--- a/crypto/sha512.c
+++ b/crypto/sha512.c
@@ -13,30 +13,42 @@
 #include
 
 /*
  * Export and import functions.  crypto_shash wants a particular format that
  * matches that used by some legacy drivers.  It currently is the same as the
- * library SHA context but with the partial block length as a u8 appended to it.
+ * library SHA context, except the value in bytecount_lo must be block-aligned
+ * and the remainder must be stored in an extra u8 appended to the struct.
  */
 
 #define SHA512_SHASH_STATE_SIZE 209
 static_assert(offsetof(struct __sha512_ctx, state) == 0);
 static_assert(offsetof(struct __sha512_ctx, bytecount_lo) == 64);
 static_assert(offsetof(struct __sha512_ctx, bytecount_hi) == 72);
 static_assert(offsetof(struct __sha512_ctx, buf) == 80);
 static_assert(sizeof(struct __sha512_ctx) + 1 == SHA512_SHASH_STATE_SIZE);
 
-static int __crypto_sha512_export(const struct __sha512_ctx *ctx, void *out)
+static int __crypto_sha512_export(const struct __sha512_ctx *ctx0, void *out)
 {
-	memcpy(out, ctx, sizeof(*ctx));
-	*((u8 *)out + sizeof(*ctx)) = ctx->bytecount_lo % SHA512_BLOCK_SIZE;
+	struct __sha512_ctx ctx = *ctx0;
+	unsigned int partial;
+	u8 *p = out;
+
+	partial = ctx.bytecount_lo % SHA512_BLOCK_SIZE;
+	ctx.bytecount_lo -= partial;
+	memcpy(p, &ctx, sizeof(ctx));
+	p += sizeof(ctx);
+	*p = partial;
 	return 0;
 }
 
 static int __crypto_sha512_import(struct __sha512_ctx *ctx, const void *in)
 {
-	memcpy(ctx, in, sizeof(*ctx));
+	const u8 *p = in;
+
+	memcpy(ctx, p, sizeof(*ctx));
+	p += sizeof(*ctx);
+	ctx->bytecount_lo += *p;
 	return 0;
 }
 
 /* SHA-384 */
 
-- 
2.50.0
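Aside (not part of the patch): patch 03's legacy export format stores a block-aligned byte count in the struct and the remainder in a trailing byte. A minimal Python model of that split and its inverse, using the SHA-512 block size:

```python
BLOCK = 128  # SHA512_BLOCK_SIZE in bytes

def export_count(bytecount_lo):
    # Mirror of __crypto_sha512_export(): the stored count is rounded
    # down to a block boundary; the remainder goes in a trailing u8.
    partial = bytecount_lo % BLOCK
    return bytecount_lo - partial, partial

def import_count(aligned, partial):
    # Mirror of __crypto_sha512_import(): add the remainder back.
    return aligned + partial

# The round trip recovers the original count for any value.
for n in (0, 1, 127, 128, 129, 1000):
    aligned, partial = export_count(n)
    assert aligned % BLOCK == 0 and partial < BLOCK
    assert import_count(aligned, partial) == n
```

The key invariant the fix enforces is the `aligned % BLOCK == 0` property of the stored field.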
From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld",
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	x86@kernel.org, Eric Biggers
Subject: [PATCH 04/18] lib/crypto: sha512: Reorder some code in sha512.c
Date: Wed, 25 Jun 2025 00:08:05 -0700
Message-ID: <20250625070819.1496119-5-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Put the IVs before the round constants, since the IVs are used first.
Put __sha512_final() just above sha384_final() and sha512_final(),
which are the functions that call it.

No code changes other than reordering.

Signed-off-by: Eric Biggers
---
 lib/crypto/sha512.c | 72 ++++++++++++++++++++++-----------------------
 1 file changed, 36 insertions(+), 36 deletions(-)

diff --git a/lib/crypto/sha512.c b/lib/crypto/sha512.c
index e650e2c3317b1..fe9d98b9b7db9 100644
--- a/lib/crypto/sha512.c
+++ b/lib/crypto/sha512.c
@@ -14,10 +14,24 @@
 #include
 #include
 #include
 #include
 
+static const struct sha512_block_state sha384_iv = {
+	.h = {
+		SHA384_H0, SHA384_H1, SHA384_H2, SHA384_H3,
+		SHA384_H4, SHA384_H5, SHA384_H6, SHA384_H7,
+	},
+};
+
+static const struct sha512_block_state sha512_iv = {
+	.h = {
+		SHA512_H0, SHA512_H1, SHA512_H2, SHA512_H3,
+		SHA512_H4, SHA512_H5, SHA512_H6, SHA512_H7,
+	},
+};
+
 static const u64 sha512_K[80] = {
 	0x428a2f98d728ae22ULL, 0x7137449123ef65cdULL, 0xb5c0fbcfec4d3b2fULL,
 	0xe9b5dba58189dbbcULL, 0x3956c25bf348b538ULL, 0x59f111f1b605d019ULL,
 	0x923f82a4af194f9bULL, 0xab1c5ed5da6d8118ULL, 0xd807aa98a3030242ULL,
 	0x12835b0145706fbeULL, 0x243185be4ee4b28cULL, 0x550c7dc3d5ffb4e2ULL,
@@ -44,24 +58,10 @@ static const u64 sha512_K[80] = {
 	0x28db77f523047d84ULL, 0x32caab7b40c72493ULL, 0x3c9ebe0a15c9bebcULL,
 	0x431d67c49c100d4cULL, 0x4cc5d4becb3e42b6ULL, 0x597f299cfc657e2aULL,
 	0x5fcb6fab3ad6faecULL, 0x6c44198c4a475817ULL,
 };
 
-static const struct sha512_block_state sha384_iv = {
-	.h = {
-		SHA384_H0, SHA384_H1, SHA384_H2, SHA384_H3,
-		SHA384_H4, SHA384_H5, SHA384_H6, SHA384_H7,
-	},
-};
-
-static const struct sha512_block_state sha512_iv = {
-	.h = {
-		SHA512_H0, SHA512_H1, SHA512_H2, SHA512_H3,
-		SHA512_H4, SHA512_H5, SHA512_H6, SHA512_H7,
-	},
-};
-
 #define Ch(x, y, z) ((z) ^ ((x) & ((y) ^ (z))))
 #define Maj(x, y, z) (((x) & (y)) | ((z) & ((x) | (y))))
 #define e0(x) (ror64((x), 28) ^ ror64((x), 34) ^ ror64((x), 39))
 #define e1(x) (ror64((x), 14) ^ ror64((x), 18) ^ ror64((x), 41))
 #define s0(x) (ror64((x), 1) ^ ror64((x), 8) ^ ((x) >> 7))
@@ -134,32 +134,10 @@ sha512_blocks_generic(struct sha512_block_state *state,
 #include "sha512.h" /* $(SRCARCH)/sha512.h */
 #else
 #define sha512_blocks sha512_blocks_generic
 #endif
 
-static void __sha512_final(struct __sha512_ctx *ctx,
-			   u8 *out, size_t digest_size)
-{
-	u64 bitcount_hi = (ctx->bytecount_hi << 3) | (ctx->bytecount_lo >> 61);
-	u64 bitcount_lo = ctx->bytecount_lo << 3;
-	size_t partial = ctx->bytecount_lo % SHA512_BLOCK_SIZE;
-
-	ctx->buf[partial++] = 0x80;
-	if (partial > SHA512_BLOCK_SIZE - 16) {
-		memset(&ctx->buf[partial], 0, SHA512_BLOCK_SIZE - partial);
-		sha512_blocks(&ctx->state, ctx->buf, 1);
-		partial = 0;
-	}
-	memset(&ctx->buf[partial], 0, SHA512_BLOCK_SIZE - 16 - partial);
-	*(__be64 *)&ctx->buf[SHA512_BLOCK_SIZE - 16] = cpu_to_be64(bitcount_hi);
-	*(__be64 *)&ctx->buf[SHA512_BLOCK_SIZE - 8] = cpu_to_be64(bitcount_lo);
-	sha512_blocks(&ctx->state, ctx->buf, 1);
-
-	for (size_t i = 0; i < digest_size; i += 8)
-		put_unaligned_be64(ctx->state.h[i / 8], out + i);
-}
-
 static void __sha512_init(struct __sha512_ctx *ctx,
 			  const struct sha512_block_state *iv,
 			  u64 initial_bytecount)
 {
 	ctx->state = *iv;
@@ -211,10 +189,32 @@ void __sha512_update(struct __sha512_ctx *ctx, const u8 *data, size_t len)
 	if (len)
 		memcpy(&ctx->buf[partial], data, len);
 }
 EXPORT_SYMBOL_GPL(__sha512_update);
 
+static void __sha512_final(struct __sha512_ctx *ctx,
+			   u8 *out, size_t digest_size)
+{
+	u64 bitcount_hi = (ctx->bytecount_hi << 3) | (ctx->bytecount_lo >> 61);
+	u64 bitcount_lo = ctx->bytecount_lo << 3;
+	size_t partial = ctx->bytecount_lo % SHA512_BLOCK_SIZE;
+
+	ctx->buf[partial++] = 0x80;
+	if (partial > SHA512_BLOCK_SIZE - 16) {
+		memset(&ctx->buf[partial], 0, SHA512_BLOCK_SIZE - partial);
+		sha512_blocks(&ctx->state, ctx->buf, 1);
+		partial = 0;
+	}
+	memset(&ctx->buf[partial], 0, SHA512_BLOCK_SIZE - 16 - partial);
+	*(__be64 *)&ctx->buf[SHA512_BLOCK_SIZE - 16] = cpu_to_be64(bitcount_hi);
+	*(__be64 *)&ctx->buf[SHA512_BLOCK_SIZE - 8] = cpu_to_be64(bitcount_lo);
+	sha512_blocks(&ctx->state, ctx->buf, 1);
+
+	for (size_t i = 0; i < digest_size; i += 8)
+		put_unaligned_be64(ctx->state.h[i / 8], out + i);
+}
+
 void sha384_final(struct sha384_ctx *ctx, u8 out[SHA384_DIGEST_SIZE])
 {
 	__sha512_final(&ctx->ctx, out, SHA384_DIGEST_SIZE);
 	memzero_explicit(ctx, sizeof(*ctx));
 }
-- 
2.50.0
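Aside (not part of the patch): the __sha512_final() code being moved converts a 128-bit byte count, kept as two u64 halves, into a 128-bit bit count for the padding block. A Python model of that shift-and-carry, cross-checked against exact big-integer arithmetic:

```python
MASK64 = (1 << 64) - 1

def bitcount_split(bytecount_hi, bytecount_lo):
    # Mirror of __sha512_final(): multiply a 128-bit byte count by 8,
    # carrying the top 3 bits of the low half into the high half.
    hi = ((bytecount_hi << 3) | (bytecount_lo >> 61)) & MASK64
    lo = (bytecount_lo << 3) & MASK64
    return hi, lo

# Cross-check against exact 128-bit arithmetic on single big integers.
for bc in (0, 1, 1 << 61, (1 << 64) - 1, (1 << 100) + 12345):
    hi, lo = bitcount_split(bc >> 64, bc & MASK64)
    assert (hi << 64) | lo == (bc * 8) & ((1 << 128) - 1)
```

The `>> 61` is the interesting term: it is `64 - 3`, i.e. the three bits shifted out of the low word by the `<< 3`.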
From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld",
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	x86@kernel.org, Eric Biggers
Subject: [PATCH 05/18] lib/crypto: sha512: Do not include
Date: Wed, 25 Jun 2025 00:08:06 -0700
Message-ID: <20250625070819.1496119-6-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Since the SHA-512 code is now consolidated into a single translation
unit except for assembly code, there is no longer any need for an
internal header.  Indeed, lib/crypto/sha512.c relies on only for
indirect inclusions.  Stop including it.

This prepares for the later removal of this header, once the SHA-256
code is reorganized similarly and stops needing it too.
Signed-off-by: Eric Biggers
---
 lib/crypto/sha512.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/crypto/sha512.c b/lib/crypto/sha512.c
index fe9d98b9b7db9..f5a9569a7ef96 100644
--- a/lib/crypto/sha512.c
+++ b/lib/crypto/sha512.c
@@ -7,15 +7,16 @@
  * Copyright (c) 2003 Kyle McMartin
  * Copyright 2025 Google LLC
  */
 
 #include
-#include
+#include
 #include
 #include
 #include
 #include
+#include
 #include
 
 static const struct sha512_block_state sha384_iv = {
 	.h = {
 		SHA384_H0, SHA384_H1, SHA384_H2, SHA384_H3,
-- 
2.50.0

From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld",
	linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
	linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
	x86@kernel.org, Eric Biggers
Subject: [PATCH 06/18] lib/crypto: sha512: Fix a grammatical error in kerneldoc comments
Date: Wed, 25 Jun 2025 00:08:07 -0700
Message-ID: <20250625070819.1496119-7-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

"An HMAC", not "A HMAC".
Signed-off-by: Eric Biggers
---
 include/crypto/sha2.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h
index 36a9dab805be7..296ce9d468bfc 100644
--- a/include/crypto/sha2.h
+++ b/include/crypto/sha2.h
@@ -247,11 +247,11 @@ struct hmac_sha384_ctx {
  */
 void hmac_sha384_preparekey(struct hmac_sha384_key *key,
			    const u8 *raw_key, size_t raw_key_len);
 
 /**
- * hmac_sha384_init() - Initialize a HMAC-SHA384 context for a new message
+ * hmac_sha384_init() - Initialize an HMAC-SHA384 context for a new message
  * @ctx: (output) the HMAC context to initialize
  * @key: the prepared HMAC key
  *
  * If you don't need incremental computation, consider hmac_sha384() instead.
  *
@@ -262,11 +262,11 @@ static inline void hmac_sha384_init(struct hmac_sha384_ctx *ctx,
 {
	__hmac_sha512_init(&ctx->ctx, &key->key);
 }
 
 /**
- * hmac_sha384_update() - Update a HMAC-SHA384 context with message data
+ * hmac_sha384_update() - Update an HMAC-SHA384 context with message data
  * @ctx: the HMAC context to update; must have been initialized
  * @data: the message data
  * @data_len: the data length in bytes
  *
  * This can be called any number of times.
@@ -278,11 +278,11 @@ static inline void hmac_sha384_update(struct hmac_sha384_ctx *ctx,
 {
	__sha512_update(&ctx->ctx.sha_ctx, data, data_len);
 }
 
 /**
- * hmac_sha384_final() - Finish computing a HMAC-SHA384 value
+ * hmac_sha384_final() - Finish computing an HMAC-SHA384 value
  * @ctx: the HMAC context to finalize; must have been initialized
  * @out: (output) the resulting HMAC-SHA384 value
  *
  * After finishing, this zeroizes @ctx.  So the caller does not need to do it.
  *
@@ -405,11 +405,11 @@ struct hmac_sha512_ctx {
  */
 void hmac_sha512_preparekey(struct hmac_sha512_key *key,
			    const u8 *raw_key, size_t raw_key_len);
 
 /**
- * hmac_sha512_init() - Initialize a HMAC-SHA512 context for a new message
+ * hmac_sha512_init() - Initialize an HMAC-SHA512 context for a new message
  * @ctx: (output) the HMAC context to initialize
  * @key: the prepared HMAC key
  *
  * If you don't need incremental computation, consider hmac_sha512() instead.
  *
@@ -420,11 +420,11 @@ static inline void hmac_sha512_init(struct hmac_sha512_ctx *ctx,
 {
	__hmac_sha512_init(&ctx->ctx, &key->key);
 }
 
 /**
- * hmac_sha512_update() - Update a HMAC-SHA512 context with message data
+ * hmac_sha512_update() - Update an HMAC-SHA512 context with message data
  * @ctx: the HMAC context to update; must have been initialized
  * @data: the message data
  * @data_len: the data length in bytes
  *
  * This can be called any number of times.
@@ -436,11 +436,11 @@ static inline void hmac_sha512_update(struct hmac_sha512_ctx *ctx,
 {
	__sha512_update(&ctx->ctx.sha_ctx, data, data_len);
 }
 
 /**
- * hmac_sha512_final() - Finish computing a HMAC-SHA512 value
+ * hmac_sha512_final() - Finish computing an HMAC-SHA512 value
  * @ctx: the HMAC context to finalize; must have been initialized
  * @out: (output) the resulting HMAC-SHA512 value
  *
  * After finishing, this zeroizes @ctx.  So the caller does not need to do it.
  *
-- 
2.50.0

From nobody Sun Sep 7 11:31:36 2025
iK7S6x93chir8amN7aJinaXtXZEHsjWX0ZQ82+Yq2sYG/Uz8xB5Bh26wshdB7cCu1j MWbfnItzcqqAhjD1xmylW17qmZH9qmFLwgW6njpigcrHg1Ehhg3WocNaQuohNf4WMK H645fNotiWbsg== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel , "Jason A . Donenfeld" , linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Eric Biggers Subject: [PATCH 07/18] lib/crypto: sha256: Reorder some code Date: Wed, 25 Jun 2025 00:08:08 -0700 Message-ID: <20250625070819.1496119-8-ebiggers@kernel.org> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org> References: <20250625070819.1496119-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" First, move the declarations of sha224_init/update/final to be just above the corresponding SHA-256 code, matching the order that I used for SHA-384 and SHA-512. In sha2.h, the end result is that SHA-224, SHA-256, SHA-384, and SHA-512 are all in the logical order. Second, move sha224_block_init() and sha256_block_init() to be just below crypto_sha256_state. In later changes, these functions as well as struct crypto_sha256_state will no longer be used by the library functions. They'll remain just for some legacy offload drivers. This gets them into a logical place in the file for that. No code changes other than reordering. 
Signed-off-by: Eric Biggers
---
 include/crypto/sha2.h | 60 +++++++++++++++++++++----------------------
 lib/crypto/sha256.c   | 12 ++++-----
 2 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h
index 296ce9d468bfc..bb181b7996cdc 100644
--- a/include/crypto/sha2.h
+++ b/include/crypto/sha2.h
@@ -69,10 +69,36 @@ extern const u8 sha512_zero_message_hash[SHA512_DIGEST_SIZE];
 struct crypto_sha256_state {
	u32 state[SHA256_STATE_WORDS];
	u64 count;
 };
 
+static inline void sha224_block_init(struct crypto_sha256_state *sctx)
+{
+	sctx->state[0] = SHA224_H0;
+	sctx->state[1] = SHA224_H1;
+	sctx->state[2] = SHA224_H2;
+	sctx->state[3] = SHA224_H3;
+	sctx->state[4] = SHA224_H4;
+	sctx->state[5] = SHA224_H5;
+	sctx->state[6] = SHA224_H6;
+	sctx->state[7] = SHA224_H7;
+	sctx->count = 0;
+}
+
+static inline void sha256_block_init(struct crypto_sha256_state *sctx)
+{
+	sctx->state[0] = SHA256_H0;
+	sctx->state[1] = SHA256_H1;
+	sctx->state[2] = SHA256_H2;
+	sctx->state[3] = SHA256_H3;
+	sctx->state[4] = SHA256_H4;
+	sctx->state[5] = SHA256_H5;
+	sctx->state[6] = SHA256_H6;
+	sctx->state[7] = SHA256_H7;
+	sctx->count = 0;
+}
+
 struct sha256_state {
	union {
		struct crypto_sha256_state ctx;
		struct {
			u32 state[SHA256_STATE_WORDS];
@@ -86,51 +112,25 @@ struct sha512_state {
	u64 state[SHA512_DIGEST_SIZE / 8];
	u64 count[2];
	u8 buf[SHA512_BLOCK_SIZE];
 };
 
-static inline void sha256_block_init(struct crypto_sha256_state *sctx)
+static inline void sha224_init(struct sha256_state *sctx)
 {
-	sctx->state[0] = SHA256_H0;
-	sctx->state[1] = SHA256_H1;
-	sctx->state[2] = SHA256_H2;
-	sctx->state[3] = SHA256_H3;
-	sctx->state[4] = SHA256_H4;
-	sctx->state[5] = SHA256_H5;
-	sctx->state[6] = SHA256_H6;
-	sctx->state[7] = SHA256_H7;
-	sctx->count = 0;
+	sha224_block_init(&sctx->ctx);
 }
+/* Simply use sha256_update as it is equivalent to sha224_update. */
+void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]);
 
 static inline void sha256_init(struct sha256_state *sctx)
 {
	sha256_block_init(&sctx->ctx);
 }
 void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len);
 void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE]);
 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]);
 
-static inline void sha224_block_init(struct crypto_sha256_state *sctx)
-{
-	sctx->state[0] = SHA224_H0;
-	sctx->state[1] = SHA224_H1;
-	sctx->state[2] = SHA224_H2;
-	sctx->state[3] = SHA224_H3;
-	sctx->state[4] = SHA224_H4;
-	sctx->state[5] = SHA224_H5;
-	sctx->state[6] = SHA224_H6;
-	sctx->state[7] = SHA224_H7;
-	sctx->count = 0;
-}
-
-static inline void sha224_init(struct sha256_state *sctx)
-{
-	sha224_block_init(&sctx->ctx);
-}
-/* Simply use sha256_update as it is equivalent to sha224_update. */
-void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]);
-
 /* State for the SHA-512 (and SHA-384) compression function */
 struct sha512_block_state {
	u64 h[8];
 };
 
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index 6bfa4ae8dfb59..573ccecbf48bf 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -56,22 +56,22 @@ static inline void __sha256_final(struct sha256_state *sctx, u8 *out,
	sha256_finup(&sctx->ctx, sctx->buf, partial, out, digest_size,
		     sha256_purgatory(), false);
	memzero_explicit(sctx, sizeof(*sctx));
 }
 
-void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE])
-{
-	__sha256_final(sctx, out, SHA256_DIGEST_SIZE);
-}
-EXPORT_SYMBOL(sha256_final);
-
 void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE])
 {
	__sha256_final(sctx, out, SHA224_DIGEST_SIZE);
 }
 EXPORT_SYMBOL(sha224_final);
 
+void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE])
+{
+	__sha256_final(sctx, out, SHA256_DIGEST_SIZE);
+}
+EXPORT_SYMBOL(sha256_final);
+
 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE])
 {
	struct sha256_state sctx;
 
	sha256_init(&sctx);
-- 
2.50.0
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld", linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Eric Biggers
Subject: [PATCH 08/18] lib/crypto: sha256: Remove sha256_blocks_simd()
Date: Wed, 25 Jun 2025 00:08:09 -0700
Message-ID: <20250625070819.1496119-9-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Instead of having both sha256_blocks_arch() and sha256_blocks_simd(),
have just sha256_blocks_arch(), which uses the most efficient
implementation that is available in the calling context.

This is simpler, as it reduces the API surface.  It's also safer, since
sha256_blocks_arch() just works in all contexts, including contexts
where the FPU/SIMD/vector registers cannot be used.  This doesn't mean
that SHA-256 computations *should* be done in such contexts, but rather
that we should just do the right thing instead of corrupting a random
task's registers.  Eliminating this footgun and simplifying the code is
well worth the very small performance cost of doing the check.
Signed-off-by: Eric Biggers
---
 include/crypto/internal/sha2.h |  6 ------
 lib/crypto/Kconfig             |  8 --------
 lib/crypto/arm/Kconfig         |  1 -
 lib/crypto/arm/sha256-armv4.pl | 20 ++++++++++----------
 lib/crypto/arm/sha256.c        | 14 +++++++-------
 lib/crypto/arm64/Kconfig       |  1 -
 lib/crypto/arm64/sha2-armv8.pl |  2 +-
 lib/crypto/arm64/sha256.c      | 14 +++++++-------
 lib/crypto/arm64/sha512.h      |  6 +++---
 lib/crypto/riscv/Kconfig       |  1 -
 lib/crypto/riscv/sha256.c      | 12 +++---------
 lib/crypto/x86/Kconfig         |  1 -
 lib/crypto/x86/sha256.c        | 12 +++---------
 13 files changed, 34 insertions(+), 64 deletions(-)

diff --git a/include/crypto/internal/sha2.h b/include/crypto/internal/sha2.h
index b9bccd3ff57fc..79be22381ef86 100644
--- a/include/crypto/internal/sha2.h
+++ b/include/crypto/internal/sha2.h
@@ -1,11 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 
 #ifndef _CRYPTO_INTERNAL_SHA2_H
 #define _CRYPTO_INTERNAL_SHA2_H
 
-#include
 #include
 #include
 #include
 #include
 #include
@@ -20,22 +19,17 @@ static inline bool sha256_is_arch_optimized(void)
 #endif
 void sha256_blocks_generic(u32 state[SHA256_STATE_WORDS],
			   const u8 *data, size_t nblocks);
 void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
			const u8 *data, size_t nblocks);
-void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
-			const u8 *data, size_t nblocks);
 
 static inline void sha256_choose_blocks(
	u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks,
	bool force_generic, bool force_simd)
 {
	if (!IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256) || force_generic)
		sha256_blocks_generic(state, data, nblocks);
-	else if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD) &&
-		 (force_simd || crypto_simd_usable()))
-		sha256_blocks_simd(state, data, nblocks);
	else
		sha256_blocks_arch(state, data, nblocks);
 }
 
 static __always_inline void sha256_finup(
diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig
index be2d335129401..efc91300ab865 100644
--- a/lib/crypto/Kconfig
+++ b/lib/crypto/Kconfig
@@ -150,18 +150,10 @@ config CRYPTO_ARCH_HAVE_LIB_SHA256
	bool
	help
	  Declares whether the architecture provides an arch-specific
	  accelerated implementation of the SHA-256 library interface.
 
-config CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
-	bool
-	help
-	  Declares whether the architecture provides an arch-specific
-	  accelerated implementation of the SHA-256 library interface
-	  that is SIMD-based and therefore not usable in hardirq
-	  context.
-
 config CRYPTO_LIB_SHA256_GENERIC
	tristate
	default CRYPTO_LIB_SHA256 if !CRYPTO_ARCH_HAVE_LIB_SHA256
	help
	  This symbol can be selected by arch implementations of the SHA-256
diff --git a/lib/crypto/arm/Kconfig b/lib/crypto/arm/Kconfig
index d1ad664f0c674..9f3ff30f40328 100644
--- a/lib/crypto/arm/Kconfig
+++ b/lib/crypto/arm/Kconfig
@@ -26,6 +26,5 @@ config CRYPTO_POLY1305_ARM
 config CRYPTO_SHA256_ARM
	tristate
	depends on !CPU_V7M
	default CRYPTO_LIB_SHA256
	select CRYPTO_ARCH_HAVE_LIB_SHA256
-	select CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
diff --git a/lib/crypto/arm/sha256-armv4.pl b/lib/crypto/arm/sha256-armv4.pl
index 8122db7fd5990..f3a2b54efd4ee 100644
--- a/lib/crypto/arm/sha256-armv4.pl
+++ b/lib/crypto/arm/sha256-armv4.pl
@@ -202,22 +202,22 @@ K256:
 .word	0x90befffa,0xa4506ceb,0xbef9a3f7,0xc67178f2
 .size	K256,.-K256
 .word	0			@ terminator
 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
 .LOPENSSL_armcap:
-.word	OPENSSL_armcap_P-sha256_blocks_arch
+.word	OPENSSL_armcap_P-sha256_block_data_order
 #endif
 .align	5
 
-.global	sha256_blocks_arch
-.type	sha256_blocks_arch,%function
-sha256_blocks_arch:
-.Lsha256_blocks_arch:
+.global	sha256_block_data_order
+.type	sha256_block_data_order,%function
+sha256_block_data_order:
+.Lsha256_block_data_order:
 #if __ARM_ARCH__<7
-	sub	r3,pc,#8		@ sha256_blocks_arch
+	sub	r3,pc,#8		@ sha256_block_data_order
 #else
-	adr	r3,.Lsha256_blocks_arch
+	adr	r3,.Lsha256_block_data_order
 #endif
 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
	ldr	r12,.LOPENSSL_armcap
	ldr	r12,[r3,r12]		@ OPENSSL_armcap_P
	tst	r12,#ARMV8_SHA256
@@ -280,11 +280,11 @@ $code.=<<___;
	ldmia	sp!,{r4-r11,lr}
	tst	lr,#1
	moveq	pc,lr			@ be binary compatible with V4, yet
	bx	lr			@ interoperable with Thumb ISA:-)
 #endif
-.size	sha256_blocks_arch,.-sha256_blocks_arch
+.size	sha256_block_data_order,.-sha256_block_data_order
 ___
 ######################################################################
 # NEON stuff
 #
 {{{
@@ -468,12 +468,12 @@ $code.=<<___;
 sha256_block_data_order_neon:
 .LNEON:
	stmdb	sp!,{r4-r12,lr}
 
	sub	$H,sp,#16*4+16
-	adr	$Ktbl,.Lsha256_blocks_arch
-	sub	$Ktbl,$Ktbl,#.Lsha256_blocks_arch-K256
+	adr	$Ktbl,.Lsha256_block_data_order
+	sub	$Ktbl,$Ktbl,#.Lsha256_block_data_order-K256
	bic	$H,$H,#15		@ align for 128-bit stores
	mov	$t2,sp
	mov	sp,$H			@ alloca
	add	$len,$inp,$len,lsl#6	@ len to point at the end of inp
 
diff --git a/lib/crypto/arm/sha256.c b/lib/crypto/arm/sha256.c
index 109192e54b0f0..2c9cfdaaa0691 100644
--- a/lib/crypto/arm/sha256.c
+++ b/lib/crypto/arm/sha256.c
@@ -4,40 +4,40 @@
  *
  * Copyright 2025 Google LLC
  */
 #include
 #include
+#include
 #include
 #include
 
-asmlinkage void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
-				   const u8 *data, size_t nblocks);
-EXPORT_SYMBOL_GPL(sha256_blocks_arch);
+asmlinkage void sha256_block_data_order(u32 state[SHA256_STATE_WORDS],
+					const u8 *data, size_t nblocks);
 asmlinkage void sha256_block_data_order_neon(u32 state[SHA256_STATE_WORDS],
					     const u8 *data, size_t nblocks);
 asmlinkage void sha256_ce_transform(u32 state[SHA256_STATE_WORDS],
				    const u8 *data, size_t nblocks);
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce);
 
-void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
			const u8 *data, size_t nblocks)
 {
	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
-	    static_branch_likely(&have_neon)) {
+	    static_branch_likely(&have_neon) && crypto_simd_usable()) {
		kernel_neon_begin();
		if (static_branch_likely(&have_ce))
			sha256_ce_transform(state, data, nblocks);
		else
			sha256_block_data_order_neon(state, data, nblocks);
		kernel_neon_end();
	} else {
-		sha256_blocks_arch(state, data, nblocks);
+		sha256_block_data_order(state, data, nblocks);
	}
 }
-EXPORT_SYMBOL_GPL(sha256_blocks_simd);
+EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
 bool sha256_is_arch_optimized(void)
 {
	/* We always can use at least the ARM scalar implementation. */
	return true;
diff --git a/lib/crypto/arm64/Kconfig b/lib/crypto/arm64/Kconfig
index 129a7685cb4c1..49e57bfdb5b52 100644
--- a/lib/crypto/arm64/Kconfig
+++ b/lib/crypto/arm64/Kconfig
@@ -15,6 +15,5 @@ config CRYPTO_POLY1305_NEON
 
 config CRYPTO_SHA256_ARM64
	tristate
	default CRYPTO_LIB_SHA256
	select CRYPTO_ARCH_HAVE_LIB_SHA256
-	select CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
diff --git a/lib/crypto/arm64/sha2-armv8.pl b/lib/crypto/arm64/sha2-armv8.pl
index 4aebd20c498bc..35ec9ae99fe16 100644
--- a/lib/crypto/arm64/sha2-armv8.pl
+++ b/lib/crypto/arm64/sha2-armv8.pl
@@ -93,11 +93,11 @@ if ($output =~ /512/) {
	@sigma1=(17,19,10);
	$rounds=64;
	$reg_t="w";
 }
 
-$func="sha${BITS}_blocks_arch";
+$func="sha${BITS}_block_data_order";
 
 ($ctx,$inp,$num,$Ktbl)=map("x$_",(0..2,30));
 
 @X=map("$reg_t$_",(3..15,0..2));
 @V=($A,$B,$C,$D,$E,$F,$G,$H)=map("$reg_t$_",(20..27));
diff --git a/lib/crypto/arm64/sha256.c b/lib/crypto/arm64/sha256.c
index bcf7a3adc0c46..fb9bff40357be 100644
--- a/lib/crypto/arm64/sha256.c
+++ b/lib/crypto/arm64/sha256.c
@@ -4,29 +4,29 @@
  *
  * Copyright 2025 Google LLC
  */
 #include
 #include
+#include
 #include
 #include
 
-asmlinkage void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
-				   const u8 *data, size_t nblocks);
-EXPORT_SYMBOL_GPL(sha256_blocks_arch);
+asmlinkage void sha256_block_data_order(u32 state[SHA256_STATE_WORDS],
+					const u8 *data, size_t nblocks);
 asmlinkage void sha256_block_neon(u32 state[SHA256_STATE_WORDS],
				  const u8 *data, size_t nblocks);
 asmlinkage size_t __sha256_ce_transform(u32 state[SHA256_STATE_WORDS],
					const u8 *data, size_t nblocks);
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce);
 
-void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
			const u8 *data, size_t nblocks)
 {
	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
-	    static_branch_likely(&have_neon)) {
+	    static_branch_likely(&have_neon) && crypto_simd_usable()) {
		if (static_branch_likely(&have_ce)) {
			do {
				size_t rem;
 
				kernel_neon_begin();
@@ -40,14 +40,14 @@ void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
			kernel_neon_begin();
			sha256_block_neon(state, data, nblocks);
			kernel_neon_end();
		}
	} else {
-		sha256_blocks_arch(state, data, nblocks);
+		sha256_block_data_order(state, data, nblocks);
	}
 }
-EXPORT_SYMBOL_GPL(sha256_blocks_simd);
+EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
 bool sha256_is_arch_optimized(void)
 {
	/* We always can use at least the ARM64 scalar implementation. */
	return true;
diff --git a/lib/crypto/arm64/sha512.h b/lib/crypto/arm64/sha512.h
index eae14f9752e0b..6abb40b467f2e 100644
--- a/lib/crypto/arm64/sha512.h
+++ b/lib/crypto/arm64/sha512.h
@@ -9,12 +9,12 @@
 #include
 #include
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha512_insns);
 
-asmlinkage void sha512_blocks_arch(struct sha512_block_state *state,
-				   const u8 *data, size_t nblocks);
+asmlinkage void sha512_block_data_order(struct sha512_block_state *state,
+					const u8 *data, size_t nblocks);
 asmlinkage size_t __sha512_ce_transform(struct sha512_block_state *state,
					const u8 *data, size_t nblocks);
 
 static void sha512_blocks(struct sha512_block_state *state,
			  const u8 *data, size_t nblocks)
@@ -30,11 +30,11 @@ static void sha512_blocks(struct sha512_block_state *state,
			kernel_neon_end();
			data += (nblocks - rem) * SHA512_BLOCK_SIZE;
			nblocks = rem;
		} while (nblocks);
	} else {
-		sha512_blocks_arch(state, data, nblocks);
+		sha512_block_data_order(state, data, nblocks);
	}
 }
 
 #ifdef CONFIG_KERNEL_MODE_NEON
 #define sha512_mod_init_arch sha512_mod_init_arch
diff --git a/lib/crypto/riscv/Kconfig b/lib/crypto/riscv/Kconfig
index 47c99ea97ce2c..c100571feb7e8 100644
--- a/lib/crypto/riscv/Kconfig
+++ b/lib/crypto/riscv/Kconfig
@@ -10,7 +10,6 @@ config CRYPTO_CHACHA_RISCV64
 config CRYPTO_SHA256_RISCV64
	tristate
	depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
	default CRYPTO_LIB_SHA256
	select CRYPTO_ARCH_HAVE_LIB_SHA256
-	select CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
	select CRYPTO_LIB_SHA256_GENERIC
diff --git a/lib/crypto/riscv/sha256.c b/lib/crypto/riscv/sha256.c
index 71808397dff4c..aa77349d08f30 100644
--- a/lib/crypto/riscv/sha256.c
+++ b/lib/crypto/riscv/sha256.c
@@ -9,36 +9,30 @@
  * Author: Jerry Shih
  */
 
 #include
 #include
+#include
 #include
 #include
 
 asmlinkage void sha256_transform_zvknha_or_zvknhb_zvkb(
	u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks);
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_extensions);
 
-void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
			const u8 *data, size_t nblocks)
 {
-	if (static_branch_likely(&have_extensions)) {
+	if (static_branch_likely(&have_extensions) && crypto_simd_usable()) {
		kernel_vector_begin();
		sha256_transform_zvknha_or_zvknhb_zvkb(state, data, nblocks);
		kernel_vector_end();
	} else {
		sha256_blocks_generic(state, data, nblocks);
	}
 }
-EXPORT_SYMBOL_GPL(sha256_blocks_simd);
-
-void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
-			const u8 *data, size_t nblocks)
-{
-	sha256_blocks_generic(state, data, nblocks);
-}
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
 bool sha256_is_arch_optimized(void)
 {
	return static_key_enabled(&have_extensions);
diff --git a/lib/crypto/x86/Kconfig b/lib/crypto/x86/Kconfig
index 5e94cdee492c2..e344579db3d85 100644
--- a/lib/crypto/x86/Kconfig
+++ b/lib/crypto/x86/Kconfig
@@ -28,7 +28,6 @@ config CRYPTO_POLY1305_X86_64
 config CRYPTO_SHA256_X86_64
	tristate
	depends on 64BIT
	default CRYPTO_LIB_SHA256
	select CRYPTO_ARCH_HAVE_LIB_SHA256
-	select CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
	select CRYPTO_LIB_SHA256_GENERIC
diff --git a/lib/crypto/x86/sha256.c b/lib/crypto/x86/sha256.c
index 80380f8fdcee4..baba74d7d26f2 100644
--- a/lib/crypto/x86/sha256.c
+++ b/lib/crypto/x86/sha256.c
@@ -4,10 +4,11 @@
  *
  * Copyright 2025 Google LLC
  */
 #include
 #include
+#include
 #include
 #include
 #include
 
 asmlinkage void sha256_transform_ssse3(u32 state[SHA256_STATE_WORDS],
@@ -21,28 +22,21 @@ asmlinkage void sha256_ni_transform(u32 state[SHA256_STATE_WORDS],
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_x86);
 
 DEFINE_STATIC_CALL(sha256_blocks_x86, sha256_transform_ssse3);
 
-void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
			const u8 *data, size_t nblocks)
 {
-	if (static_branch_likely(&have_sha256_x86)) {
+	if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable()) {
		kernel_fpu_begin();
		static_call(sha256_blocks_x86)(state, data, nblocks);
		kernel_fpu_end();
	} else {
		sha256_blocks_generic(state, data, nblocks);
	}
 }
-EXPORT_SYMBOL_GPL(sha256_blocks_simd);
-
-void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
-			const u8 *data, size_t nblocks)
-{
-	sha256_blocks_generic(state, data, nblocks);
-}
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
 bool sha256_is_arch_optimized(void)
 {
	return static_key_enabled(&have_sha256_x86);
-- 
2.50.0
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld", linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Eric Biggers
Subject: [PATCH 09/18] lib/crypto: sha256: Add sha224() and sha224_update()
Date: Wed, 25 Jun 2025 00:08:10 -0700
Message-ID: <20250625070819.1496119-10-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Add a one-shot SHA-224 computation function sha224(), for consistency
with sha256(), sha384(), and sha512() which all already exist.

Similarly, add sha224_update().  While for now it's identical to
sha256_update(), omitting it makes the API harder to use since users
have to "know" which functions are the same between SHA-224 and
SHA-256.  Also, this is a prerequisite for using different context
types for each.

Signed-off-by: Eric Biggers
---
 include/crypto/sha2.h           | 10 ++++++++--
 lib/crypto/sha256.c             | 10 ++++++++++
 lib/crypto/tests/sha224_kunit.c | 13 +------------
 3 files changed, 19 insertions(+), 14 deletions(-)

diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h
index bb181b7996cdc..e31da0743a522 100644
--- a/include/crypto/sha2.h
+++ b/include/crypto/sha2.h
@@ -112,22 +112,28 @@ struct sha512_state {
	u64 state[SHA512_DIGEST_SIZE / 8];
	u64 count[2];
	u8 buf[SHA512_BLOCK_SIZE];
 };
 
+void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len);
+
 static inline void sha224_init(struct sha256_state *sctx)
 {
	sha224_block_init(&sctx->ctx);
 }
-/* Simply use sha256_update as it is equivalent to sha224_update. */
+static inline void sha224_update(struct sha256_state *sctx,
+				 const u8 *data, size_t len)
+{
+	sha256_update(sctx, data, len);
+}
 void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]);
+void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE]);
 
 static inline void sha256_init(struct sha256_state *sctx)
 {
	sha256_block_init(&sctx->ctx);
 }
-void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len);
 void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE]);
 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]);
 
 /* State for the SHA-512 (and SHA-384) compression function */
 struct sha512_block_state {
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index 573ccecbf48bf..ccaae70880166 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -68,10 +68,20 @@ void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE])
 {
	__sha256_final(sctx, out, SHA256_DIGEST_SIZE);
 }
 EXPORT_SYMBOL(sha256_final);
 
+void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE])
+{
+	struct sha256_state sctx;
+
+	sha224_init(&sctx);
+	sha224_update(&sctx, data, len);
+	sha224_final(&sctx, out);
+}
+EXPORT_SYMBOL(sha224);
+
 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE])
 {
	struct sha256_state sctx;
 
	sha256_init(&sctx);
diff --git a/lib/crypto/tests/sha224_kunit.c b/lib/crypto/tests/sha224_kunit.c
index 5015861a55112..c484c1d4a2a5e 100644
--- a/lib/crypto/tests/sha224_kunit.c
+++ b/lib/crypto/tests/sha224_kunit.c
@@ -3,26 +3,15 @@
  * Copyright 2025 Google LLC
  */
 #include
 #include "sha224-testvecs.h"
 
-/* TODO: add sha224() to the library itself */
-static inline void sha224(const u8 *data, size_t len,
-			  u8 out[SHA224_DIGEST_SIZE])
-{
-	struct sha256_state state;
-
-	sha224_init(&state);
-	sha256_update(&state, data, len);
-	sha224_final(&state, out);
-}
-
 #define HASH		sha224
 #define HASH_CTX	sha256_state
 #define HASH_SIZE	SHA224_DIGEST_SIZE
 #define HASH_INIT	sha224_init
-#define HASH_UPDATE	sha256_update
+#define HASH_UPDATE	sha224_update
 #define HASH_FINAL	sha224_final
 #define HASH_TESTVECS	sha224_testvecs
 /* TODO: add HMAC-SHA224 support to the library, then enable the tests for it */
 #include "hash-test-template.h"
 
-- 
2.50.0
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld", linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Eric Biggers
Subject: [PATCH 10/18] lib/crypto: sha256: Make library API use strongly-typed contexts
Date: Wed, 25 Jun 2025 00:08:11 -0700
Message-ID: <20250625070819.1496119-11-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Currently the SHA-224 and SHA-256 library functions can be mixed
arbitrarily, even in ways that are incorrect, for example using
sha224_init() and sha256_final().  This is because they operate on the
same structure, sha256_state.

Introduce stronger typing, as I did for SHA-384 and SHA-512.

Also as I did for SHA-384 and SHA-512, use the names *_ctx instead of
*_state.  The *_ctx names have the following small benefits:

- They're shorter.
- They avoid an ambiguity with the compression function state.
- They're consistent with the well-known OpenSSL API.
- Users usually name the variable 'sctx' anyway, which suggests that
  *_ctx would be the more natural name for the actual struct.
Therefore: update the SHA-224 and SHA-256 APIs, implementation, and calling code accordingly. In the new structs, also strongly-type the compression function state. Signed-off-by: Eric Biggers --- arch/riscv/purgatory/purgatory.c | 8 +-- arch/s390/purgatory/purgatory.c | 2 +- arch/x86/purgatory/purgatory.c | 2 +- crypto/sha256.c | 16 ++--- drivers/char/tpm/tpm2-sessions.c | 12 ++-- include/crypto/sha2.h | 52 ++++++++++++---- kernel/kexec_file.c | 10 ++-- lib/crypto/sha256.c | 100 ++++++++++++++++++++++--------- lib/crypto/tests/sha224_kunit.c | 2 +- lib/crypto/tests/sha256_kunit.c | 2 +- 10 files changed, 141 insertions(+), 65 deletions(-) diff --git a/arch/riscv/purgatory/purgatory.c b/arch/riscv/purgatory/purgat= ory.c index 80596ab5fb622..bbd5cfa4d7412 100644 --- a/arch/riscv/purgatory/purgatory.c +++ b/arch/riscv/purgatory/purgatory.c @@ -18,18 +18,18 @@ u8 purgatory_sha256_digest[SHA256_DIGEST_SIZE] __sectio= n(".kexec-purgatory"); struct kexec_sha_region purgatory_sha_regions[KEXEC_SEGMENT_MAX] __section= (".kexec-purgatory"); =20 static int verify_sha256_digest(void) { struct kexec_sha_region *ptr, *end; - struct sha256_state ss; + struct sha256_ctx sctx; u8 digest[SHA256_DIGEST_SIZE]; =20 - sha256_init(&ss); + sha256_init(&sctx); end =3D purgatory_sha_regions + ARRAY_SIZE(purgatory_sha_regions); for (ptr =3D purgatory_sha_regions; ptr < end; ptr++) - sha256_update(&ss, (uint8_t *)(ptr->start), ptr->len); - sha256_final(&ss, digest); + sha256_update(&sctx, (uint8_t *)(ptr->start), ptr->len); + sha256_final(&sctx, digest); if (memcmp(digest, purgatory_sha256_digest, sizeof(digest)) !=3D 0) return 1; return 0; } =20 diff --git a/arch/s390/purgatory/purgatory.c b/arch/s390/purgatory/purgator= y.c index 030efda05dbe5..ecb38102187c2 100644 --- a/arch/s390/purgatory/purgatory.c +++ b/arch/s390/purgatory/purgatory.c @@ -14,11 +14,11 @@ =20 int verify_sha256_digest(void) { struct kexec_sha_region *ptr, *end; u8 digest[SHA256_DIGEST_SIZE]; - struct sha256_state sctx; 
+ struct sha256_ctx sctx; =20 sha256_init(&sctx); end =3D purgatory_sha_regions + ARRAY_SIZE(purgatory_sha_regions); =20 for (ptr =3D purgatory_sha_regions; ptr < end; ptr++) diff --git a/arch/x86/purgatory/purgatory.c b/arch/x86/purgatory/purgatory.c index aea47e7939637..655139dd05325 100644 --- a/arch/x86/purgatory/purgatory.c +++ b/arch/x86/purgatory/purgatory.c @@ -23,11 +23,11 @@ struct kexec_sha_region purgatory_sha_regions[KEXEC_SEG= MENT_MAX] __section(".kex =20 static int verify_sha256_digest(void) { struct kexec_sha_region *ptr, *end; u8 digest[SHA256_DIGEST_SIZE]; - struct sha256_state sctx; + struct sha256_ctx sctx; =20 sha256_init(&sctx); end =3D purgatory_sha_regions + ARRAY_SIZE(purgatory_sha_regions); =20 for (ptr =3D purgatory_sha_regions; ptr < end; ptr++) diff --git a/crypto/sha256.c b/crypto/sha256.c index 4aeb213bab117..15c57fba256b7 100644 --- a/crypto/sha256.c +++ b/crypto/sha256.c @@ -135,28 +135,28 @@ static int crypto_sha224_final_lib(struct shash_desc = *desc, u8 *out) return 0; } =20 static int crypto_sha256_import_lib(struct shash_desc *desc, const void *i= n) { - struct sha256_state *sctx =3D shash_desc_ctx(desc); + struct __sha256_ctx *sctx =3D shash_desc_ctx(desc); const u8 *p =3D in; =20 memcpy(sctx, p, sizeof(*sctx)); p +=3D sizeof(*sctx); - sctx->count +=3D *p; + sctx->bytecount +=3D *p; return 0; } =20 static int crypto_sha256_export_lib(struct shash_desc *desc, void *out) { - struct sha256_state *sctx0 =3D shash_desc_ctx(desc); - struct sha256_state sctx =3D *sctx0; + struct __sha256_ctx *sctx0 =3D shash_desc_ctx(desc); + struct __sha256_ctx sctx =3D *sctx0; unsigned int partial; u8 *p =3D out; =20 - partial =3D sctx.count % SHA256_BLOCK_SIZE; - sctx.count -=3D partial; + partial =3D sctx.bytecount % SHA256_BLOCK_SIZE; + sctx.bytecount -=3D partial; memcpy(p, &sctx, sizeof(sctx)); p +=3D sizeof(sctx); *p =3D partial; return 0; } @@ -199,11 +199,11 @@ static struct shash_alg algs[] =3D { .digestsize =3D SHA256_DIGEST_SIZE, .init 
=3D crypto_sha256_init, .update =3D crypto_sha256_update_lib, .final =3D crypto_sha256_final_lib, .digest =3D crypto_sha256_digest_lib, - .descsize =3D sizeof(struct sha256_state), + .descsize =3D sizeof(struct sha256_ctx), .statesize =3D sizeof(struct crypto_sha256_state) + SHA256_BLOCK_SIZE + 1, .import =3D crypto_sha256_import_lib, .export =3D crypto_sha256_export_lib, }, @@ -214,11 +214,11 @@ static struct shash_alg algs[] =3D { .base.cra_module =3D THIS_MODULE, .digestsize =3D SHA224_DIGEST_SIZE, .init =3D crypto_sha224_init, .update =3D crypto_sha256_update_lib, .final =3D crypto_sha224_final_lib, - .descsize =3D sizeof(struct sha256_state), + .descsize =3D sizeof(struct sha224_ctx), .statesize =3D sizeof(struct crypto_sha256_state) + SHA256_BLOCK_SIZE + 1, .import =3D crypto_sha256_import_lib, .export =3D crypto_sha256_export_lib, }, diff --git a/drivers/char/tpm/tpm2-sessions.c b/drivers/char/tpm/tpm2-sessi= ons.c index 7b5049b3d476e..bdb119453dfbe 100644 --- a/drivers/char/tpm/tpm2-sessions.c +++ b/drivers/char/tpm/tpm2-sessions.c @@ -388,11 +388,11 @@ static int tpm2_create_primary(struct tpm_chip *chip,= u32 hierarchy, * It turns out the crypto hmac(sha256) is hard for us to consume * because it assumes a fixed key and the TPM seems to change the key * on every operation, so we weld the hmac init and final functions in * here to give it the same usage characteristics as a regular hash */ -static void tpm2_hmac_init(struct sha256_state *sctx, u8 *key, u32 key_len) +static void tpm2_hmac_init(struct sha256_ctx *sctx, u8 *key, u32 key_len) { u8 pad[SHA256_BLOCK_SIZE]; int i; =20 sha256_init(sctx); @@ -404,11 +404,11 @@ static void tpm2_hmac_init(struct sha256_state *sctx,= u8 *key, u32 key_len) pad[i] ^=3D HMAC_IPAD_VALUE; } sha256_update(sctx, pad, sizeof(pad)); } =20 -static void tpm2_hmac_final(struct sha256_state *sctx, u8 *key, u32 key_le= n, +static void tpm2_hmac_final(struct sha256_ctx *sctx, u8 *key, u32 key_len, u8 *out) { u8 
pad[SHA256_BLOCK_SIZE]; int i; =20 @@ -438,11 +438,11 @@ static void tpm2_KDFa(u8 *key, u32 key_len, const cha= r *label, u8 *u, { u32 counter =3D 1; const __be32 bits =3D cpu_to_be32(bytes * 8); =20 while (bytes > 0) { - struct sha256_state sctx; + struct sha256_ctx sctx; __be32 c =3D cpu_to_be32(counter); =20 tpm2_hmac_init(&sctx, key, key_len); sha256_update(&sctx, (u8 *)&c, sizeof(c)); sha256_update(&sctx, label, strlen(label)+1); @@ -465,11 +465,11 @@ static void tpm2_KDFa(u8 *key, u32 key_len, const cha= r *label, u8 *u, * in this KDF. */ static void tpm2_KDFe(u8 z[EC_PT_SZ], const char *str, u8 *pt_u, u8 *pt_v, u8 *out) { - struct sha256_state sctx; + struct sha256_ctx sctx; /* * this should be an iterative counter, but because we know * we're only taking 32 bytes for the point using a sha256 * hash which is also 32 bytes, there's only one loop */ @@ -590,11 +590,11 @@ void tpm_buf_fill_hmac_session(struct tpm_chip *chip,= struct tpm_buf *buf) struct tpm_header *head =3D (struct tpm_header *)buf->data; off_t offset_s =3D TPM_HEADER_SIZE, offset_p; u8 *hmac =3D NULL; u32 attrs; u8 cphash[SHA256_DIGEST_SIZE]; - struct sha256_state sctx; + struct sha256_ctx sctx; =20 if (!auth) return; =20 /* save the command code in BE format */ @@ -748,11 +748,11 @@ int tpm_buf_check_hmac_response(struct tpm_chip *chip= , struct tpm_buf *buf, struct tpm_header *head =3D (struct tpm_header *)buf->data; struct tpm2_auth *auth =3D chip->auth; off_t offset_s, offset_p; u8 rphash[SHA256_DIGEST_SIZE]; u32 attrs, cc; - struct sha256_state sctx; + struct sha256_ctx sctx; u16 tag =3D be16_to_cpu(head->tag); int parm_len, len, i, handles; =20 if (!auth) return rc; diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h index e31da0743a522..18e1eec841b71 100644 --- a/include/crypto/sha2.h +++ b/include/crypto/sha2.h @@ -112,29 +112,59 @@ struct sha512_state { u64 state[SHA512_DIGEST_SIZE / 8]; u64 count[2]; u8 buf[SHA512_BLOCK_SIZE]; }; =20 -void sha256_update(struct sha256_state 
*sctx, const u8 *data, size_t len); +/* State for the SHA-256 (and SHA-224) compression function */ +struct sha256_block_state { + u32 h[SHA256_STATE_WORDS]; +}; =20 -static inline void sha224_init(struct sha256_state *sctx) -{ - sha224_block_init(&sctx->ctx); -} -static inline void sha224_update(struct sha256_state *sctx, +/* + * Context structure, shared by SHA-224 and SHA-256. The sha224_ctx and + * sha256_ctx structs wrap this one so that the API has proper typing and + * doesn't allow mixing the SHA-224 and SHA-256 functions arbitrarily. + */ +struct __sha256_ctx { + struct sha256_block_state state; + u64 bytecount; + u8 buf[SHA256_BLOCK_SIZE] __aligned(__alignof__(__be64)); +}; +void __sha256_update(struct __sha256_ctx *ctx, const u8 *data, size_t len); + +/** + * struct sha224_ctx - Context for hashing a message with SHA-224 + * @ctx: private + */ +struct sha224_ctx { + struct __sha256_ctx ctx; +}; + +void sha224_init(struct sha224_ctx *ctx); +static inline void sha224_update(struct sha224_ctx *ctx, const u8 *data, size_t len) { - sha256_update(sctx, data, len); + __sha256_update(&ctx->ctx, data, len); } -void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]); +void sha224_final(struct sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]); void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE]); =20 -static inline void sha256_init(struct sha256_state *sctx) +/** + * struct sha256_ctx - Context for hashing a message with SHA-256 + * @ctx: private + */ +struct sha256_ctx { + struct __sha256_ctx ctx; +}; + +void sha256_init(struct sha256_ctx *ctx); +static inline void sha256_update(struct sha256_ctx *ctx, + const u8 *data, size_t len) { - sha256_block_init(&sctx->ctx); + __sha256_update(&ctx->ctx, data, len); } -void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE]); +void sha256_final(struct sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]); void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]); =20 /* State for 
the SHA-512 (and SHA-384) compression function */ struct sha512_block_state { u64 h[8]; diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c index 69fe76fd92334..b835033c65eb1 100644 --- a/kernel/kexec_file.c +++ b/kernel/kexec_file.c @@ -749,11 +749,11 @@ int kexec_add_buffer(struct kexec_buf *kbuf) } =20 /* Calculate and store the digest of segments */ static int kexec_calculate_store_digests(struct kimage *image) { - struct sha256_state state; + struct sha256_ctx sctx; int ret =3D 0, i, j, zero_buf_sz, sha_region_sz; size_t nullsz; u8 digest[SHA256_DIGEST_SIZE]; void *zero_buf; struct kexec_sha_region *sha_regions; @@ -768,11 +768,11 @@ static int kexec_calculate_store_digests(struct kimag= e *image) sha_region_sz =3D KEXEC_SEGMENT_MAX * sizeof(struct kexec_sha_region); sha_regions =3D vzalloc(sha_region_sz); if (!sha_regions) return -ENOMEM; =20 - sha256_init(&state); + sha256_init(&sctx); =20 for (j =3D i =3D 0; i < image->nr_segments; i++) { struct kexec_segment *ksegment; =20 #ifdef CONFIG_CRASH_HOTPLUG @@ -794,11 +794,11 @@ static int kexec_calculate_store_digests(struct kimag= e *image) * the current index */ if (check_ima_segment_index(image, i)) continue; =20 - sha256_update(&state, ksegment->kbuf, ksegment->bufsz); + sha256_update(&sctx, ksegment->kbuf, ksegment->bufsz); =20 /* * Assume rest of the buffer is filled with zero and * update digest accordingly. 
*/ @@ -806,20 +806,20 @@ static int kexec_calculate_store_digests(struct kimag= e *image) while (nullsz) { unsigned long bytes =3D nullsz; =20 if (bytes > zero_buf_sz) bytes =3D zero_buf_sz; - sha256_update(&state, zero_buf, bytes); + sha256_update(&sctx, zero_buf, bytes); nullsz -=3D bytes; } =20 sha_regions[j].start =3D ksegment->mem; sha_regions[j].len =3D ksegment->memsz; j++; } =20 - sha256_final(&state, digest); + sha256_final(&sctx, digest); =20 ret =3D kexec_purgatory_get_set_symbol(image, "purgatory_sha_regions", sha_regions, sha_region_sz, 0); if (ret) goto out_free_sha_regions; diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c index ccaae70880166..3e7797a4489de 100644 --- a/lib/crypto/sha256.c +++ b/lib/crypto/sha256.c @@ -16,10 +16,24 @@ #include #include #include #include =20 +static const struct sha256_block_state sha224_iv =3D { + .h =3D { + SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3, + SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7, + }, +}; + +static const struct sha256_block_state sha256_iv =3D { + .h =3D { + SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3, + SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7, + }, +}; + /* * If __DISABLE_EXPORTS is defined, then this file is being compiled for a * pre-boot environment. In that case, ignore the kconfig options, pull t= he * generic code into the same translation unit, and use that only. 
*/ @@ -30,65 +44,97 @@ static inline bool sha256_purgatory(void) { return __is_defined(__DISABLE_EXPORTS); } =20 -static inline void sha256_blocks(u32 state[SHA256_STATE_WORDS], const u8 *= data, - size_t nblocks) +static inline void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) +{ + sha256_choose_blocks(state->h, data, nblocks, sha256_purgatory(), false); +} + +static void __sha256_init(struct __sha256_ctx *ctx, + const struct sha256_block_state *iv, + u64 initial_bytecount) +{ + ctx->state =3D *iv; + ctx->bytecount =3D initial_bytecount; +} + +void sha224_init(struct sha224_ctx *ctx) +{ + __sha256_init(&ctx->ctx, &sha224_iv, 0); +} +EXPORT_SYMBOL_GPL(sha224_init); + +void sha256_init(struct sha256_ctx *ctx) { - sha256_choose_blocks(state, data, nblocks, sha256_purgatory(), false); + __sha256_init(&ctx->ctx, &sha256_iv, 0); } +EXPORT_SYMBOL_GPL(sha256_init); =20 -void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len) +void __sha256_update(struct __sha256_ctx *ctx, const u8 *data, size_t len) { - size_t partial =3D sctx->count % SHA256_BLOCK_SIZE; + size_t partial =3D ctx->bytecount % SHA256_BLOCK_SIZE; =20 - sctx->count +=3D len; - BLOCK_HASH_UPDATE_BLOCKS(sha256_blocks, sctx->ctx.state, data, len, - SHA256_BLOCK_SIZE, sctx->buf, partial); + ctx->bytecount +=3D len; + BLOCK_HASH_UPDATE_BLOCKS(sha256_blocks, &ctx->state, data, len, + SHA256_BLOCK_SIZE, ctx->buf, partial); } -EXPORT_SYMBOL(sha256_update); +EXPORT_SYMBOL(__sha256_update); =20 -static inline void __sha256_final(struct sha256_state *sctx, u8 *out, - size_t digest_size) +static void __sha256_final(struct __sha256_ctx *ctx, + u8 *out, size_t digest_size) { - size_t partial =3D sctx->count % SHA256_BLOCK_SIZE; + u64 bitcount =3D ctx->bytecount << 3; + size_t partial =3D ctx->bytecount % SHA256_BLOCK_SIZE; + + ctx->buf[partial++] =3D 0x80; + if (partial > SHA256_BLOCK_SIZE - 8) { + memset(&ctx->buf[partial], 0, SHA256_BLOCK_SIZE - partial); + 
sha256_blocks(&ctx->state, ctx->buf, 1); + partial =3D 0; + } + memset(&ctx->buf[partial], 0, SHA256_BLOCK_SIZE - 8 - partial); + *(__be64 *)&ctx->buf[SHA256_BLOCK_SIZE - 8] =3D cpu_to_be64(bitcount); + sha256_blocks(&ctx->state, ctx->buf, 1); =20 - sha256_finup(&sctx->ctx, sctx->buf, partial, out, digest_size, - sha256_purgatory(), false); - memzero_explicit(sctx, sizeof(*sctx)); + for (size_t i =3D 0; i < digest_size; i +=3D 4) + put_unaligned_be32(ctx->state.h[i / 4], out + i); } =20 -void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]) +void sha224_final(struct sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]) { - __sha256_final(sctx, out, SHA224_DIGEST_SIZE); + __sha256_final(&ctx->ctx, out, SHA224_DIGEST_SIZE); + memzero_explicit(ctx, sizeof(*ctx)); } EXPORT_SYMBOL(sha224_final); =20 -void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE]) +void sha256_final(struct sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]) { - __sha256_final(sctx, out, SHA256_DIGEST_SIZE); + __sha256_final(&ctx->ctx, out, SHA256_DIGEST_SIZE); + memzero_explicit(ctx, sizeof(*ctx)); } EXPORT_SYMBOL(sha256_final); =20 void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE]) { - struct sha256_state sctx; + struct sha224_ctx ctx; =20 - sha224_init(&sctx); - sha224_update(&sctx, data, len); - sha224_final(&sctx, out); + sha224_init(&ctx); + sha224_update(&ctx, data, len); + sha224_final(&ctx, out); } EXPORT_SYMBOL(sha224); =20 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]) { - struct sha256_state sctx; + struct sha256_ctx ctx; =20 - sha256_init(&sctx); - sha256_update(&sctx, data, len); - sha256_final(&sctx, out); + sha256_init(&ctx); + sha256_update(&ctx, data, len); + sha256_final(&ctx, out); } EXPORT_SYMBOL(sha256); =20 MODULE_DESCRIPTION("SHA-256 Algorithm"); MODULE_LICENSE("GPL"); diff --git a/lib/crypto/tests/sha224_kunit.c b/lib/crypto/tests/sha224_kuni= t.c index c484c1d4a2a5e..2ae0f83b0ae72 100644 --- 
a/lib/crypto/tests/sha224_kunit.c +++ b/lib/crypto/tests/sha224_kunit.c @@ -4,11 +4,11 @@ */ #include #include "sha224-testvecs.h" =20 #define HASH sha224 -#define HASH_CTX sha256_state +#define HASH_CTX sha224_ctx #define HASH_SIZE SHA224_DIGEST_SIZE #define HASH_INIT sha224_init #define HASH_UPDATE sha224_update #define HASH_FINAL sha224_final #define HASH_TESTVECS sha224_testvecs diff --git a/lib/crypto/tests/sha256_kunit.c b/lib/crypto/tests/sha256_kuni= t.c index 4002acfbe66b0..7fe12f3c68bfd 100644 --- a/lib/crypto/tests/sha256_kunit.c +++ b/lib/crypto/tests/sha256_kunit.c @@ -4,11 +4,11 @@ */ #include #include "sha256-testvecs.h" =20 #define HASH sha256 -#define HASH_CTX sha256_state +#define HASH_CTX sha256_ctx #define HASH_SIZE SHA256_DIGEST_SIZE #define HASH_INIT sha256_init #define HASH_UPDATE sha256_update #define HASH_FINAL sha256_final #define HASH_TESTVECS sha256_testvecs --=20 2.50.0 From nobody Sun Sep 7 11:31:36 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 90D0421858A; Wed, 25 Jun 2025 07:10:32 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750835432; cv=none; b=Bdujq3uvms5QF8d32zomBj34wyLhrHKLKj0/GAUxMtgRnEI+hvQdpXEaQ6B54LmcKGT8EDD7Dg+IDSfLxh0k6CKWgMPxY3QWl7vVeUmgpf4d7Vh9l/J9muqmvcfCaTMJ5RzKMOMplSrfmPKMQkPe43BJ7ws7QH2HjfQZalBRNZ4= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1750835432; c=relaxed/simple; bh=GMam131N1o/5e3I3F+I9ymjxueGAN0OrCeWnHp1EQDA=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; 
b=Qla+mMThYrIlTP5XKNsByj3/vUNOIsLaCNI5x+UYWUjPhv+qq4iv1bly8PeJkNsy8plm6R7fPmEqV6fdb/As4L3QDWnuOH0Uq15n4V68YTKRu7fYBTRGSwyUA2fglGU1JrhRylZlbMF9AL8rMbD+FwgIvjSUuWWScNfvW7gGDao= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=UZg5CVnO; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="UZg5CVnO" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 20A8FC4CEEE; Wed, 25 Jun 2025 07:10:32 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1750835432; bh=GMam131N1o/5e3I3F+I9ymjxueGAN0OrCeWnHp1EQDA=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=UZg5CVnOW+bpgYExd5yE3LBDQF4l8SyP1jY0An82o+CEsahhSP60qCkUDAEeDZ9Qi wcM/LqGssYMo0UDtpeqKxTL15GlL6pdrfjimwOODY4lS8HABWXmlvnCZDJE0XK/7xO 1BGNtUoczmt7Q53+QLkX7mMxsapsC5XMd96eShPkVIiRxLVxVXXlpjMg19ExJ+CPgr dtGg7jUUv4zGum2btrBJTFSLWNSrDK51HD6P/eJKdK8xdIZz+FAU+S7PeN9OuV+r86 UvM8ut3Pzrcsfdotx6biYw6wt9ZiUoBGE/KsnwSNbeT72e/Bb2Y5N8JElE3zv1Cg4U X9kgYBl8dyFiA== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel , "Jason A . 
Donenfeld" , linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Eric Biggers Subject: [PATCH 11/18] lib/crypto: sha256: Propagate sha256_block_state type to implementations Date: Wed, 25 Jun 2025 00:08:12 -0700 Message-ID: <20250625070819.1496119-12-ebiggers@kernel.org> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org> References: <20250625070819.1496119-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The previous commit made the SHA-256 compression function state be strongly typed, but it wasn't propagated all the way down to the implementations of it. Do that now. Signed-off-by: Eric Biggers --- .../mips/cavium-octeon/crypto/octeon-sha256.c | 2 +- include/crypto/internal/sha2.h | 8 +++---- lib/crypto/arm/sha256-ce.S | 2 +- lib/crypto/arm/sha256.c | 8 +++---- lib/crypto/arm64/sha256-ce.S | 2 +- lib/crypto/arm64/sha256.c | 8 +++---- lib/crypto/powerpc/sha256.c | 2 +- .../sha256-riscv64-zvknha_or_zvknhb-zvkb.S | 2 +- lib/crypto/riscv/sha256.c | 7 +++--- lib/crypto/s390/sha256.c | 2 +- lib/crypto/sha256-generic.c | 24 ++++++++++++++----- lib/crypto/sparc/sha256.c | 4 ++-- lib/crypto/x86/sha256-avx-asm.S | 2 +- lib/crypto/x86/sha256-avx2-asm.S | 2 +- lib/crypto/x86/sha256-ni-asm.S | 2 +- lib/crypto/x86/sha256-ssse3-asm.S | 2 +- lib/crypto/x86/sha256.c | 10 ++++---- 17 files changed, 51 insertions(+), 38 deletions(-) diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha256.c b/arch/mips/cav= ium-octeon/crypto/octeon-sha256.c index c20038239cb6b..f8664818d04ec 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-sha256.c +++ b/arch/mips/cavium-octeon/crypto/octeon-sha256.c @@ -20,11 +20,11 @@ =20 /* * 
We pass everything as 64-bit. OCTEON can handle misaligned data. */ =20 -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { struct octeon_cop2_state cop2_state; u64 *state64 =3D (u64 *)state; unsigned long flags; diff --git a/include/crypto/internal/sha2.h b/include/crypto/internal/sha2.h index 79be22381ef86..51028484ccdc7 100644 --- a/include/crypto/internal/sha2.h +++ b/include/crypto/internal/sha2.h @@ -15,23 +15,23 @@ bool sha256_is_arch_optimized(void); static inline bool sha256_is_arch_optimized(void) { return false; } #endif -void sha256_blocks_generic(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_generic(struct sha256_block_state *state, const u8 *data, size_t nblocks); -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks); =20 static inline void sha256_choose_blocks( u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks, bool force_generic, bool force_simd) { if (!IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256) || force_generic) - sha256_blocks_generic(state, data, nblocks); + sha256_blocks_generic((struct sha256_block_state *)state, data, nblocks); else - sha256_blocks_arch(state, data, nblocks); + sha256_blocks_arch((struct sha256_block_state *)state, data, nblocks); } =20 static __always_inline void sha256_finup( struct crypto_sha256_state *sctx, u8 buf[SHA256_BLOCK_SIZE], size_t len, u8 out[SHA256_DIGEST_SIZE], size_t digest_size, diff --git a/lib/crypto/arm/sha256-ce.S b/lib/crypto/arm/sha256-ce.S index ac2c9b01b22d2..7481ac8e6c0d9 100644 --- a/lib/crypto/arm/sha256-ce.S +++ b/lib/crypto/arm/sha256-ce.S @@ -65,11 +65,11 @@ .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 =20 /* - * void sha256_ce_transform(u32 state[SHA256_STATE_WORDS], + * void 
sha256_ce_transform(struct sha256_block_state *state, * const u8 *data, size_t nblocks); */ ENTRY(sha256_ce_transform) /* load state */ vld1.32 {dga-dgb}, [r0] diff --git a/lib/crypto/arm/sha256.c b/lib/crypto/arm/sha256.c index 2c9cfdaaa0691..7d90823586952 100644 --- a/lib/crypto/arm/sha256.c +++ b/lib/crypto/arm/sha256.c @@ -8,21 +8,21 @@ #include #include #include #include =20 -asmlinkage void sha256_block_data_order(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_block_data_order(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage void sha256_block_data_order_neon(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_block_data_order_neon(struct sha256_block_state *st= ate, const u8 *data, size_t nblocks); -asmlinkage void sha256_ce_transform(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_ce_transform(struct sha256_block_state *state, const u8 *data, size_t nblocks); =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce); =20 -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && static_branch_likely(&have_neon) && crypto_simd_usable()) { kernel_neon_begin(); diff --git a/lib/crypto/arm64/sha256-ce.S b/lib/crypto/arm64/sha256-ce.S index f3e21c6d87d2e..b99d9589c4217 100644 --- a/lib/crypto/arm64/sha256-ce.S +++ b/lib/crypto/arm64/sha256-ce.S @@ -69,11 +69,11 @@ .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 =20 /* - * size_t __sha256_ce_transform(u32 state[SHA256_STATE_WORDS], + * size_t __sha256_ce_transform(struct sha256_block_state *state, * const u8 *data, size_t nblocks); */ .text SYM_FUNC_START(__sha256_ce_transform) /* load round constants */ diff --git a/lib/crypto/arm64/sha256.c 
b/lib/crypto/arm64/sha256.c index fb9bff40357be..609ffb8151987 100644 --- a/lib/crypto/arm64/sha256.c +++ b/lib/crypto/arm64/sha256.c @@ -8,21 +8,21 @@ #include #include #include #include =20 -asmlinkage void sha256_block_data_order(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_block_data_order(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage void sha256_block_neon(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_block_neon(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage size_t __sha256_ce_transform(u32 state[SHA256_STATE_WORDS], +asmlinkage size_t __sha256_ce_transform(struct sha256_block_state *state, const u8 *data, size_t nblocks); =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce); =20 -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && static_branch_likely(&have_neon) && crypto_simd_usable()) { if (static_branch_likely(&have_ce)) { diff --git a/lib/crypto/powerpc/sha256.c b/lib/crypto/powerpc/sha256.c index 6b0f079587eb6..c3f844ae0aceb 100644 --- a/lib/crypto/powerpc/sha256.c +++ b/lib/crypto/powerpc/sha256.c @@ -40,11 +40,11 @@ static void spe_end(void) disable_kernel_spe(); /* reenable preemption */ preempt_enable(); } =20 -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { do { /* cut input data into smaller blocks */ u32 unit =3D min_t(size_t, nblocks, diff --git a/lib/crypto/riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.S b/lib/= crypto/riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.S index fad501ad06171..1618d1220a6e7 100644 --- a/lib/crypto/riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.S +++ b/lib/crypto/riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.S @@ -104,11 +104,11 @@ sha256_4rounds 
\last, \k1, W1, W2, W3, W0 sha256_4rounds \last, \k2, W2, W3, W0, W1 sha256_4rounds \last, \k3, W3, W0, W1, W2 .endm =20 -// void sha256_transform_zvknha_or_zvknhb_zvkb(u32 state[SHA256_STATE_WORD= S], +// void sha256_transform_zvknha_or_zvknhb_zvkb(struct sha256_block_state *= state, // const u8 *data, size_t nblocks); SYM_FUNC_START(sha256_transform_zvknha_or_zvknhb_zvkb) =20 // Load the round constants into K0-K15. vsetivli zero, 4, e32, m1, ta, ma diff --git a/lib/crypto/riscv/sha256.c b/lib/crypto/riscv/sha256.c index aa77349d08f30..a2079aa3ae925 100644 --- a/lib/crypto/riscv/sha256.c +++ b/lib/crypto/riscv/sha256.c @@ -13,16 +13,17 @@ #include #include #include #include =20 -asmlinkage void sha256_transform_zvknha_or_zvknhb_zvkb( - u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks); +asmlinkage void +sha256_transform_zvknha_or_zvknhb_zvkb(struct sha256_block_state *state, + const u8 *data, size_t nblocks); =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_extensions); =20 -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (static_branch_likely(&have_extensions) && crypto_simd_usable()) { kernel_vector_begin(); sha256_transform_zvknha_or_zvknhb_zvkb(state, data, nblocks); diff --git a/lib/crypto/s390/sha256.c b/lib/crypto/s390/sha256.c index 7dfe120fafaba..fb565718f7539 100644 --- a/lib/crypto/s390/sha256.c +++ b/lib/crypto/s390/sha256.c @@ -10,11 +10,11 @@ #include #include =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_cpacf_sha256); =20 -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (static_branch_likely(&have_cpacf_sha256)) cpacf_kimd(CPACF_KIMD_SHA_256, state, data, nblocks * SHA256_BLOCK_SIZE); diff --git a/lib/crypto/sha256-generic.c b/lib/crypto/sha256-generic.c index 2968d95d04038..99f904033c261 100644 --- 
a/lib/crypto/sha256-generic.c +++ b/lib/crypto/sha256-generic.c @@ -68,11 +68,11 @@ static inline void BLEND_OP(int I, u32 *W) t2 = e0(a) + Maj(a, b, c); \ d += t1; \ h = t1 + t2; \ } while (0) -static void sha256_block_generic(u32 state[SHA256_STATE_WORDS], +static void sha256_block_generic(struct sha256_block_state *state, const u8 *input, u32 W[64]) { u32 a, b, c, d, e, f, g, h; int i; @@ -99,12 +99,18 @@ static void sha256_block_generic(u32 state[SHA256_STATE_WORDS], BLEND_OP(i + 6, W); BLEND_OP(i + 7, W); } /* load the state into our registers */ - a = state[0]; b = state[1]; c = state[2]; d = state[3]; - e = state[4]; f = state[5]; g = state[6]; h = state[7]; + a = state->h[0]; + b = state->h[1]; + c = state->h[2]; + d = state->h[3]; + e = state->h[4]; + f = state->h[5]; + g = state->h[6]; + h = state->h[7]; /* now iterate */ for (i = 0; i < 64; i += 8) { SHA256_ROUND(i + 0, a, b, c, d, e, f, g, h); SHA256_ROUND(i + 1, h, a, b, c, d, e, f, g); @@ -114,15 +120,21 @@ static void sha256_block_generic(u32 state[SHA256_STATE_WORDS], SHA256_ROUND(i + 5, d, e, f, g, h, a, b, c); SHA256_ROUND(i + 6, c, d, e, f, g, h, a, b); SHA256_ROUND(i + 7, b, c, d, e, f, g, h, a); } - state[0] += a; state[1] += b; state[2] += c; state[3] += d; - state[4] += e; state[5] += f; state[6] += g; state[7] += h; + state->h[0] += a; + state->h[1] += b; + state->h[2] += c; + state->h[3] += d; + state->h[4] += e; + state->h[5] += f; + state->h[6] += g; + state->h[7] += h; } -void sha256_blocks_generic(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_generic(struct sha256_block_state *state, const u8 *data, size_t nblocks) { u32 W[64]; do {
diff --git a/lib/crypto/sparc/sha256.c b/lib/crypto/sparc/sha256.c index 8bdec2db08b30..060664b88a6d3 100644 --- a/lib/crypto/sparc/sha256.c +++ b/lib/crypto/sparc/sha256.c @@ -17,14 +17,14 @@ #include #include static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_opcodes); -asmlinkage void sha256_sparc64_transform(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_sparc64_transform(struct sha256_block_state *state, const u8 *data, size_t nblocks); -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (static_branch_likely(&have_sha256_opcodes)) sha256_sparc64_transform(state, data, nblocks); else
diff --git a/lib/crypto/x86/sha256-avx-asm.S b/lib/crypto/x86/sha256-avx-asm.S index 0d7b2c3e45d9a..73bcff2b548f4 100644 --- a/lib/crypto/x86/sha256-avx-asm.S +++ b/lib/crypto/x86/sha256-avx-asm.S @@ -339,11 +339,11 @@ a = TMP_ add y0, h # h = h + S1 + CH + k + w + S0 + MAJ ROTATE_ARGS .endm ######################################################################## -## void sha256_transform_avx(u32 state[SHA256_STATE_WORDS], +## void sha256_transform_avx(struct sha256_block_state *state, ## const u8 *data, size_t nblocks); ######################################################################## .text SYM_FUNC_START(sha256_transform_avx) ANNOTATE_NOENDBR # since this is called only via static_call
diff --git a/lib/crypto/x86/sha256-avx2-asm.S b/lib/crypto/x86/sha256-avx2-asm.S index 25d3380321ec3..45787570387f2 100644 --- a/lib/crypto/x86/sha256-avx2-asm.S +++ b/lib/crypto/x86/sha256-avx2-asm.S @@ -516,11 +516,11 @@ STACK_SIZE = _CTX + _CTX_SIZE ROTATE_ARGS .endm ######################################################################## -## void sha256_transform_rorx(u32 state[SHA256_STATE_WORDS], +## void sha256_transform_rorx(struct sha256_block_state *state, ## const u8 *data, size_t nblocks); ######################################################################## .text SYM_FUNC_START(sha256_transform_rorx) ANNOTATE_NOENDBR # since this is called only via static_call
diff --git a/lib/crypto/x86/sha256-ni-asm.S b/lib/crypto/x86/sha256-ni-asm.S index d3548206cf3d4..4af7d22e29e47 100644 --- a/lib/crypto/x86/sha256-ni-asm.S +++ b/lib/crypto/x86/sha256-ni-asm.S @@ -104,11 +104,11 @@ * input data, and the number of 64-byte blocks to process. Once all blocks * have been processed, the state is updated with the new state. This function * only processes complete blocks. State initialization, buffering of partial * blocks, and digest finalization is expected to be handled elsewhere. * - * void sha256_ni_transform(u32 state[SHA256_STATE_WORDS], + * void sha256_ni_transform(struct sha256_block_state *state, * const u8 *data, size_t nblocks); */ .text SYM_FUNC_START(sha256_ni_transform) ANNOTATE_NOENDBR # since this is called only via static_call
diff --git a/lib/crypto/x86/sha256-ssse3-asm.S b/lib/crypto/x86/sha256-ssse3-asm.S index 7f24a4cdcb257..407b30adcd37f 100644 --- a/lib/crypto/x86/sha256-ssse3-asm.S +++ b/lib/crypto/x86/sha256-ssse3-asm.S @@ -346,11 +346,11 @@ a = TMP_ add y0, h # h = h + S1 + CH + k + w + S0 + MAJ ROTATE_ARGS .endm ######################################################################## -## void sha256_transform_ssse3(u32 state[SHA256_STATE_WORDS], +## void sha256_transform_ssse3(struct sha256_block_state *state, ## const u8 *data, size_t nblocks); ######################################################################## .text SYM_FUNC_START(sha256_transform_ssse3) ANNOTATE_NOENDBR # since this is called only via static_call
diff --git a/lib/crypto/x86/sha256.c b/lib/crypto/x86/sha256.c index baba74d7d26f2..cbb45defbefab 100644 --- a/lib/crypto/x86/sha256.c +++ b/lib/crypto/x86/sha256.c @@ -9,24 +9,24 @@ #include #include #include #include -asmlinkage void sha256_transform_ssse3(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_transform_ssse3(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage void sha256_transform_avx(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_transform_avx(struct sha256_block_state *state, const u8 *data, size_t
nblocks); -asmlinkage void sha256_transform_rorx(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_transform_rorx(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage void sha256_ni_transform(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_ni_transform(struct sha256_block_state *state, const u8 *data, size_t nblocks); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_x86); DEFINE_STATIC_CALL(sha256_blocks_x86, sha256_transform_ssse3); -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable()) { kernel_fpu_begin(); static_call(sha256_blocks_x86)(state, data, nblocks); -- 2.50.0

From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld", linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Eric Biggers
Subject: [PATCH 12/18] lib/crypto: sha256: Add HMAC-SHA224 and HMAC-SHA256 support
Date: Wed, 25 Jun 2025 00:08:13 -0700
Message-ID: <20250625070819.1496119-13-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Since HMAC support is commonly needed and is fairly simple, include it as a first-class citizen of the SHA-256 library.
The API supports both incremental and one-shot computation, and either preparing the key ahead of time or just using a raw key. The implementation is much more streamlined than crypto/hmac.c. I've kept it consistent with the HMAC-SHA384 and HMAC-SHA512 code as much as possible. Testing of these functions will be via sha224_kunit and sha256_kunit, added by a later commit.

Signed-off-by: Eric Biggers
---
include/crypto/sha2.h | 222 ++++++++++++++++++++++++++++++++++++++++++
lib/crypto/sha256.c | 145 +++++++++++++++++++++++++--
2 files changed, 361 insertions(+), 6 deletions(-)

diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h index 18e1eec841b71..2e3fc2cf4aa0d 100644 --- a/include/crypto/sha2.h +++ b/include/crypto/sha2.h @@ -129,10 +129,26 @@ struct __sha256_ctx { u64 bytecount; u8 buf[SHA256_BLOCK_SIZE] __aligned(__alignof__(__be64)); }; void __sha256_update(struct __sha256_ctx *ctx, const u8 *data, size_t len); +/* + * HMAC key and message context structs, shared by HMAC-SHA224 and HMAC-SHA256. + * The hmac_sha224_* and hmac_sha256_* structs wrap this one so that the API has + * proper typing and doesn't allow mixing the functions arbitrarily. + */ +struct __hmac_sha256_key { + struct sha256_block_state istate; + struct sha256_block_state ostate; +}; +struct __hmac_sha256_ctx { + struct __sha256_ctx sha_ctx; + struct sha256_block_state ostate; +}; +void __hmac_sha256_init(struct __hmac_sha256_ctx *ctx, + const struct __hmac_sha256_key *key); + /** * struct sha224_ctx - Context for hashing a message with SHA-224 * @ctx: private */ struct sha224_ctx { @@ -146,10 +162,113 @@ static inline void sha224_update(struct sha224_ctx *ctx, __sha256_update(&ctx->ctx, data, len); } void sha224_final(struct sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]); void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE]); +/** + * struct hmac_sha224_key - Prepared key for HMAC-SHA224 + * @key: private + */ +struct hmac_sha224_key { + struct __hmac_sha256_key key; +}; + +/** + * struct hmac_sha224_ctx - Context for computing HMAC-SHA224 of a message + * @ctx: private + */ +struct hmac_sha224_ctx { + struct __hmac_sha256_ctx ctx; +}; + +/** + * hmac_sha224_preparekey() - Prepare a key for HMAC-SHA224 + * @key: (output) the key structure to initialize + * @raw_key: the raw HMAC-SHA224 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * + * Note: the caller is responsible for zeroizing both the struct hmac_sha224_key + * and the raw key once they are no longer needed. + * + * Context: Any context. + */ +void hmac_sha224_preparekey(struct hmac_sha224_key *key, + const u8 *raw_key, size_t raw_key_len); + +/** + * hmac_sha224_init() - Initialize an HMAC-SHA224 context for a new message + * @ctx: (output) the HMAC context to initialize + * @key: the prepared HMAC key + * + * If you don't need incremental computation, consider hmac_sha224() instead. + * + * Context: Any context. + */ +static inline void hmac_sha224_init(struct hmac_sha224_ctx *ctx, + const struct hmac_sha224_key *key) +{ + __hmac_sha256_init(&ctx->ctx, &key->key); +} + +/** + * hmac_sha224_update() - Update an HMAC-SHA224 context with message data + * @ctx: the HMAC context to update; must have been initialized + * @data: the message data + * @data_len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. + */ +static inline void hmac_sha224_update(struct hmac_sha224_ctx *ctx, + const u8 *data, size_t data_len) +{ + __sha256_update(&ctx->ctx.sha_ctx, data, data_len); +} + +/** + * hmac_sha224_final() - Finish computing an HMAC-SHA224 value + * @ctx: the HMAC context to finalize; must have been initialized + * @out: (output) the resulting HMAC-SHA224 value + * + * After finishing, this zeroizes @ctx. So the caller does not need to do it. + * + * Context: Any context. + */ +void hmac_sha224_final(struct hmac_sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]); + +/** + * hmac_sha224() - Compute HMAC-SHA224 in one shot, using a prepared key + * @key: the prepared HMAC key + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA224 value + * + * If you're using the key only once, consider using hmac_sha224_usingrawkey(). + * + * Context: Any context. + */ +void hmac_sha224(const struct hmac_sha224_key *key, + const u8 *data, size_t data_len, u8 out[SHA224_DIGEST_SIZE]); + +/** + * hmac_sha224_usingrawkey() - Compute HMAC-SHA224 in one shot, using a raw key + * @raw_key: the raw HMAC-SHA224 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA224 value + * + * If you're using the key multiple times, prefer to use + * hmac_sha224_preparekey() followed by multiple calls to hmac_sha224() instead. + * + * Context: Any context. + */ +void hmac_sha224_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA224_DIGEST_SIZE]); + /** * struct sha256_ctx - Context for hashing a message with SHA-256 * @ctx: private */ struct sha256_ctx { @@ -163,10 +282,113 @@ static inline void sha256_update(struct sha256_ctx *ctx, __sha256_update(&ctx->ctx, data, len); } void sha256_final(struct sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]); void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]); +/** + * struct hmac_sha256_key - Prepared key for HMAC-SHA256 + * @key: private + */ +struct hmac_sha256_key { + struct __hmac_sha256_key key; +}; + +/** + * struct hmac_sha256_ctx - Context for computing HMAC-SHA256 of a message + * @ctx: private + */ +struct hmac_sha256_ctx { + struct __hmac_sha256_ctx ctx; +}; + +/** + * hmac_sha256_preparekey() - Prepare a key for HMAC-SHA256 + * @key: (output) the key structure to initialize + * @raw_key: the raw HMAC-SHA256 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * + * Note: the caller is responsible for zeroizing both the struct hmac_sha256_key + * and the raw key once they are no longer needed. + * + * Context: Any context. + */ +void hmac_sha256_preparekey(struct hmac_sha256_key *key, + const u8 *raw_key, size_t raw_key_len); + +/** + * hmac_sha256_init() - Initialize an HMAC-SHA256 context for a new message + * @ctx: (output) the HMAC context to initialize + * @key: the prepared HMAC key + * + * If you don't need incremental computation, consider hmac_sha256() instead. + * + * Context: Any context.
+ */ +static inline void hmac_sha256_init(struct hmac_sha256_ctx *ctx, + const struct hmac_sha256_key *key) +{ + __hmac_sha256_init(&ctx->ctx, &key->key); +} + +/** + * hmac_sha256_update() - Update an HMAC-SHA256 context with message data + * @ctx: the HMAC context to update; must have been initialized + * @data: the message data + * @data_len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. + */ +static inline void hmac_sha256_update(struct hmac_sha256_ctx *ctx, + const u8 *data, size_t data_len) +{ + __sha256_update(&ctx->ctx.sha_ctx, data, data_len); +} + +/** + * hmac_sha256_final() - Finish computing an HMAC-SHA256 value + * @ctx: the HMAC context to finalize; must have been initialized + * @out: (output) the resulting HMAC-SHA256 value + * + * After finishing, this zeroizes @ctx. So the caller does not need to do it. + * + * Context: Any context. + */ +void hmac_sha256_final(struct hmac_sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]); + +/** + * hmac_sha256() - Compute HMAC-SHA256 in one shot, using a prepared key + * @key: the prepared HMAC key + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA256 value + * + * If you're using the key only once, consider using hmac_sha256_usingrawkey(). + * + * Context: Any context. + */ +void hmac_sha256(const struct hmac_sha256_key *key, + const u8 *data, size_t data_len, u8 out[SHA256_DIGEST_SIZE]); + +/** + * hmac_sha256_usingrawkey() - Compute HMAC-SHA256 in one shot, using a raw key + * @raw_key: the raw HMAC-SHA256 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA256 value + * + * If you're using the key multiple times, prefer to use + * hmac_sha256_preparekey() followed by multiple calls to hmac_sha256() instead. + * + * Context: Any context. + */ +void hmac_sha256_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA256_DIGEST_SIZE]); + /* State for the SHA-512 (and SHA-384) compression function */ struct sha512_block_state { u64 h[8]; };
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c index 3e7797a4489de..165c894a47aa0 100644 --- a/lib/crypto/sha256.c +++ b/lib/crypto/sha256.c @@ -1,24 +1,22 @@ // SPDX-License-Identifier: GPL-2.0-or-later /* - * SHA-256, as specified in - * http://csrc.nist.gov/groups/STM/cavp/documents/shs/sha256-384-512.pdf - * - * SHA-256 code by Jean-Luc Cooke . + * SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256 library functions * * Copyright (c) Jean-Luc Cooke * Copyright (c) Andrew McDonald * Copyright (c) 2002 James Morris * Copyright (c) 2014 Red Hat Inc. */ +#include #include #include #include #include #include -#include +#include static const struct sha256_block_state sha224_iv = { .h = { SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3, SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7, @@ -134,7 +132,142 @@ void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]) sha256_update(&ctx, data, len); sha256_final(&ctx, out); } EXPORT_SYMBOL(sha256); -MODULE_DESCRIPTION("SHA-256 Algorithm"); +static void __hmac_sha256_preparekey(struct __hmac_sha256_key *key, + const u8 *raw_key, size_t raw_key_len, + const struct sha256_block_state *iv) +{ + union { + u8 b[SHA256_BLOCK_SIZE]; + unsigned long w[SHA256_BLOCK_SIZE / sizeof(unsigned long)]; + } derived_key = { 0 }; + + if (unlikely(raw_key_len > SHA256_BLOCK_SIZE)) { + if (iv == &sha224_iv) + sha224(raw_key, raw_key_len, derived_key.b); + else + sha256(raw_key, raw_key_len, derived_key.b); + } else { + memcpy(derived_key.b, raw_key, raw_key_len); + } + + for (size_t i = 0; i < ARRAY_SIZE(derived_key.w); i++) + derived_key.w[i] ^= REPEAT_BYTE(HMAC_IPAD_VALUE); + key->istate = *iv; + sha256_blocks(&key->istate, derived_key.b, 1); + + for (size_t i = 0; i < ARRAY_SIZE(derived_key.w); i++) + derived_key.w[i] ^= REPEAT_BYTE(HMAC_OPAD_VALUE ^ + HMAC_IPAD_VALUE); + key->ostate = *iv; + sha256_blocks(&key->ostate, derived_key.b, 1); + + memzero_explicit(&derived_key, sizeof(derived_key)); +} + +void hmac_sha224_preparekey(struct hmac_sha224_key *key, + const u8 *raw_key, size_t raw_key_len) +{ + __hmac_sha256_preparekey(&key->key, raw_key, raw_key_len, &sha224_iv); +} +EXPORT_SYMBOL_GPL(hmac_sha224_preparekey); + +void hmac_sha256_preparekey(struct hmac_sha256_key *key, + const u8 *raw_key, size_t raw_key_len) +{ + __hmac_sha256_preparekey(&key->key, raw_key, raw_key_len, &sha256_iv); +} +EXPORT_SYMBOL_GPL(hmac_sha256_preparekey); + +void __hmac_sha256_init(struct __hmac_sha256_ctx *ctx, + const struct __hmac_sha256_key *key) +{ + __sha256_init(&ctx->sha_ctx, &key->istate, SHA256_BLOCK_SIZE); + ctx->ostate = key->ostate; +} +EXPORT_SYMBOL_GPL(__hmac_sha256_init); + +static void __hmac_sha256_final(struct __hmac_sha256_ctx *ctx, + u8 *out, size_t digest_size) +{ + /* Generate the padded input for the outer hash in ctx->sha_ctx.buf. */ + __sha256_final(&ctx->sha_ctx, ctx->sha_ctx.buf, digest_size); + memset(&ctx->sha_ctx.buf[digest_size], 0, + SHA256_BLOCK_SIZE - digest_size); + ctx->sha_ctx.buf[digest_size] = 0x80; + *(__be32 *)&ctx->sha_ctx.buf[SHA256_BLOCK_SIZE - 4] = + cpu_to_be32(8 * (SHA256_BLOCK_SIZE + digest_size)); + + /* Compute the outer hash, which gives the HMAC value. */ + sha256_blocks(&ctx->ostate, ctx->sha_ctx.buf, 1); + for (size_t i = 0; i < digest_size; i += 4) + put_unaligned_be32(ctx->ostate.h[i / 4], out + i); + + memzero_explicit(ctx, sizeof(*ctx)); +} + +void hmac_sha224_final(struct hmac_sha224_ctx *ctx, + u8 out[SHA224_DIGEST_SIZE]) +{ + __hmac_sha256_final(&ctx->ctx, out, SHA224_DIGEST_SIZE); +} +EXPORT_SYMBOL_GPL(hmac_sha224_final); + +void hmac_sha256_final(struct hmac_sha256_ctx *ctx, + u8 out[SHA256_DIGEST_SIZE]) +{ + __hmac_sha256_final(&ctx->ctx, out, SHA256_DIGEST_SIZE); +} +EXPORT_SYMBOL_GPL(hmac_sha256_final); + +void hmac_sha224(const struct hmac_sha224_key *key, + const u8 *data, size_t data_len, u8 out[SHA224_DIGEST_SIZE]) +{ + struct hmac_sha224_ctx ctx; + + hmac_sha224_init(&ctx, key); + hmac_sha224_update(&ctx, data, data_len); + hmac_sha224_final(&ctx, out); +} +EXPORT_SYMBOL_GPL(hmac_sha224); + +void hmac_sha256(const struct hmac_sha256_key *key, + const u8 *data, size_t data_len, u8 out[SHA256_DIGEST_SIZE]) +{ + struct hmac_sha256_ctx ctx; + + hmac_sha256_init(&ctx, key); + hmac_sha256_update(&ctx, data, data_len); + hmac_sha256_final(&ctx, out); +} +EXPORT_SYMBOL_GPL(hmac_sha256); + +void hmac_sha224_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA224_DIGEST_SIZE]) +{ + struct hmac_sha224_key key; + + hmac_sha224_preparekey(&key, raw_key, raw_key_len); + hmac_sha224(&key, data, data_len, out); + + memzero_explicit(&key, sizeof(key)); +} +EXPORT_SYMBOL_GPL(hmac_sha224_usingrawkey); + +void hmac_sha256_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA256_DIGEST_SIZE]) +{ + struct hmac_sha256_key key; + + hmac_sha256_preparekey(&key, raw_key, raw_key_len); + hmac_sha256(&key, data, data_len, out); + + memzero_explicit(&key, sizeof(key)); +} +EXPORT_SYMBOL_GPL(hmac_sha256_usingrawkey); + +MODULE_DESCRIPTION("SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256 library functions"); MODULE_LICENSE("GPL");
-- 2.50.0

From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld", linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Eric Biggers
Subject: [PATCH 13/18] crypto: sha256 - Wrap library and add HMAC support
Date: Wed, 25 Jun 2025 00:08:14 -0700
Message-ID: <20250625070819.1496119-14-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Like I did for crypto/sha512.c, rework crypto/sha256.c to simply wrap the normal library functions instead of accessing the low-level arch-optimized and generic block functions directly. Also add support for HMAC-SHA224 and HMAC-SHA256, again just wrapping the library functions.

Since the replacement crypto_shash algorithms are implemented using the (potentially arch-optimized) library functions, give them driver names ending with "-lib" rather than "-generic". Update crypto/testmgr.c and a couple odd drivers to take this change in driver name into account. Besides the above cases which are accounted for, there are no known cases where the driver names were being depended on. There is potential for confusion for people manually checking /proc/crypto (e.g. https://lore.kernel.org/r/9e33c893-2466-4d4e-afb1-966334e451a2@linux.ibm.com/), but really people just need to get used to the driver name not being meaningful for the software algorithms.
Historically, the optimized code was disabled by default, so there was some purpose to checking whether it was enabled or not. However, this is now fixed for all SHA-2 algorithms, and the library code just always does the right thing. E.g. if the CPU supports SHA-256 instructions, they are used.

This change does also mean that the generic partial block handling code in crypto/shash.c, which got added in 6.16, no longer gets used. But that's fine; the library has to implement the partial block handling anyway, and it's better to do it in the library since the block size and other properties of the algorithm are all fixed at compile time there, resulting in more streamlined code.

Signed-off-by: Eric Biggers
---
crypto/Kconfig | 4 +-
crypto/sha256.c | 286 ++++++++++++--------------
crypto/testmgr.c | 12 ++
drivers/crypto/img-hash.c | 4 +-
drivers/crypto/starfive/jh7110-hash.c | 8 +-
5 files changed, 148 insertions(+), 166 deletions(-)

diff --git a/crypto/Kconfig b/crypto/Kconfig index cb40a9b469722..3ea1397214e02 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -990,13 +990,13 @@ config CRYPTO_SHA1 config CRYPTO_SHA256 tristate "SHA-224 and SHA-256" select CRYPTO_HASH select CRYPTO_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC help - SHA-224 and SHA-256 secure hash algorithms (FIPS 180, ISO/IEC 10118-3) + SHA-224 and SHA-256 secure hash algorithms (FIPS 180, ISO/IEC + 10118-3), including HMAC support. This is required for IPsec AH (XFRM_AH) and IPsec ESP (XFRM_ESP). Used by the btrfs filesystem, Ceph, NFS, and SMB. config CRYPTO_SHA512
diff --git a/crypto/sha256.c b/crypto/sha256.c index 15c57fba256b7..d81166cbba953 100644 --- a/crypto/sha256.c +++ b/crypto/sha256.c @@ -1,283 +1,253 @@ // SPDX-License-Identifier: GPL-2.0-or-later /* - * Crypto API wrapper for the SHA-256 and SHA-224 library functions + * Crypto API support for SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256 * * Copyright (c) Jean-Luc Cooke * Copyright (c) Andrew McDonald * Copyright (c) 2002 James Morris * SHA224 Support Copyright 2007 Intel Corporation + * Copyright 2025 Google LLC */ #include -#include +#include #include #include +/* SHA-224 */ + const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] = { 0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, 0x47, 0x61, 0x02, 0xbb, 0x28, 0x82, 0x34, 0xc4, 0x15, 0xa2, 0xb0, 0x1f, 0x82, 0x8e, 0xa6, 0x2a, 0xc5, 0xb3, 0xe4, 0x2f }; EXPORT_SYMBOL_GPL(sha224_zero_message_hash); +#define SHA224_CTX(desc) ((struct sha224_ctx *)shash_desc_ctx(desc)) + +static int crypto_sha224_init(struct shash_desc *desc) +{ + sha224_init(SHA224_CTX(desc)); + return 0; +} + +static int crypto_sha224_update(struct shash_desc *desc, + const u8 *data, unsigned int len) +{ + sha224_update(SHA224_CTX(desc), data, len); + return 0; +} + +static int crypto_sha224_final(struct shash_desc *desc, u8 *out) +{ + sha224_final(SHA224_CTX(desc), out); + return 0; +} + +static int crypto_sha224_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, u8 *out) +{ + sha224(data, len, out); + return 0; +} + +/* SHA-256 */ + const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] = { 0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14, 0x9a, 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24, 0x27, 0xae, 0x41, 0xe4, 0x64, 0x9b, 0x93, 0x4c, 0xa4, 0x95, 0x99, 0x1b, 0x78, 0x52, 0xb8, 0x55 }; EXPORT_SYMBOL_GPL(sha256_zero_message_hash); +#define SHA256_CTX(desc) ((struct sha256_ctx *)shash_desc_ctx(desc)) + static int crypto_sha256_init(struct shash_desc *desc) { - sha256_block_init(shash_desc_ctx(desc)); + sha256_init(SHA256_CTX(desc)); return 0; } -static inline int crypto_sha256_update(struct shash_desc *desc, const u8 *data, - unsigned int len, bool force_generic) +static int crypto_sha256_update(struct shash_desc *desc, + const u8 *data, unsigned int len) { - struct crypto_sha256_state *sctx = shash_desc_ctx(desc); - int remain = len % SHA256_BLOCK_SIZE; - - sctx->count += len - remain; - sha256_choose_blocks(sctx->state, data, len / SHA256_BLOCK_SIZE, - force_generic, !force_generic); - return remain; + sha256_update(SHA256_CTX(desc), data, len); + return 0; } -static int crypto_sha256_update_generic(struct shash_desc *desc, const u8 *data, - unsigned int len) +static int crypto_sha256_final(struct shash_desc *desc, u8 *out) { - return crypto_sha256_update(desc, data, len, true); + sha256_final(SHA256_CTX(desc), out); + return 0; } -static int crypto_sha256_update_lib(struct shash_desc *desc, const u8 *data, - unsigned int len) +static int crypto_sha256_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, u8 *out) { - sha256_update(shash_desc_ctx(desc), data, len); + sha256(data, len, out); return 0; } -static int crypto_sha256_update_arch(struct shash_desc *desc, const u8 *data, - unsigned int len) -{ - return crypto_sha256_update(desc, data, len, false); -} +/* HMAC-SHA224 */ -static int crypto_sha256_final_lib(struct shash_desc *desc, u8 *out) -{ - sha256_final(shash_desc_ctx(desc), out); - return 0; -} +#define HMAC_SHA224_KEY(tfm) ((struct hmac_sha224_key *)crypto_shash_ctx(tfm)) +#define HMAC_SHA224_CTX(desc) ((struct hmac_sha224_ctx *)shash_desc_ctx(desc)) -static __always_inline int crypto_sha256_finup(struct shash_desc *desc, - const u8 *data, - unsigned int len, u8 *out, - bool force_generic) +static int crypto_hmac_sha224_setkey(struct crypto_shash *tfm, + const u8 *raw_key, unsigned int keylen) { - struct crypto_sha256_state *sctx =
shash_desc_ctx(desc); - unsigned int remain = len; - u8 *buf; - - if (len >= SHA256_BLOCK_SIZE) - remain = crypto_sha256_update(desc, data, len, force_generic); - sctx->count += remain; - buf = memcpy(sctx + 1, data + len - remain, remain); - sha256_finup(sctx, buf, remain, out, - crypto_shash_digestsize(desc->tfm), force_generic, - !force_generic); + hmac_sha224_preparekey(HMAC_SHA224_KEY(tfm), raw_key, keylen); return 0; } -static int crypto_sha256_finup_generic(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) +static int crypto_hmac_sha224_init(struct shash_desc *desc) { - return crypto_sha256_finup(desc, data, len, out, true); + hmac_sha224_init(HMAC_SHA224_CTX(desc), HMAC_SHA224_KEY(desc->tfm)); + return 0; } -static int crypto_sha256_finup_arch(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) +static int crypto_hmac_sha224_update(struct shash_desc *desc, + const u8 *data, unsigned int len) { - return crypto_sha256_finup(desc, data, len, out, false); + hmac_sha224_update(HMAC_SHA224_CTX(desc), data, len); + return 0; } -static int crypto_sha256_digest_generic(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) +static int crypto_hmac_sha224_final(struct shash_desc *desc, u8 *out) { - crypto_sha256_init(desc); - return crypto_sha256_finup_generic(desc, data, len, out); + hmac_sha224_final(HMAC_SHA224_CTX(desc), out); + return 0; } -static int crypto_sha256_digest_lib(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) +static int crypto_hmac_sha224_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, + u8 *out) { - sha256(data, len, out); + hmac_sha224(HMAC_SHA224_KEY(desc->tfm), data, len, out); return 0; } -static int crypto_sha256_digest_arch(struct shash_desc *desc, const u8 *data, - unsigned int len, u8 *out) +/* HMAC-SHA256 */ + +#define HMAC_SHA256_KEY(tfm) ((struct hmac_sha256_key *)crypto_shash_ctx(tfm)) +#define HMAC_SHA256_CTX(desc) ((struct hmac_sha256_ctx *)shash_desc_ctx(desc)) + +static int crypto_hmac_sha256_setkey(struct crypto_shash *tfm, + const u8 *raw_key, unsigned int keylen) { - crypto_sha256_init(desc); - return crypto_sha256_finup_arch(desc, data, len, out); + hmac_sha256_preparekey(HMAC_SHA256_KEY(tfm), raw_key, keylen); + return 0; } -static int crypto_sha224_init(struct shash_desc *desc) +static int crypto_hmac_sha256_init(struct shash_desc *desc) { - sha224_block_init(shash_desc_ctx(desc)); + hmac_sha256_init(HMAC_SHA256_CTX(desc), HMAC_SHA256_KEY(desc->tfm)); return 0; } -static int crypto_sha224_final_lib(struct shash_desc *desc, u8 *out) +static int crypto_hmac_sha256_update(struct shash_desc *desc, + const u8 *data, unsigned int len) { - sha224_final(shash_desc_ctx(desc), out); + hmac_sha256_update(HMAC_SHA256_CTX(desc), data, len); return 0; } -static int crypto_sha256_import_lib(struct shash_desc *desc, const void *in) +static int crypto_hmac_sha256_final(struct shash_desc *desc, u8 *out) { - struct __sha256_ctx *sctx = shash_desc_ctx(desc); - const u8 *p = in; - - memcpy(sctx, p, sizeof(*sctx)); - p += sizeof(*sctx); - sctx->bytecount += *p; + hmac_sha256_final(HMAC_SHA256_CTX(desc), out); return 0; } -static int crypto_sha256_export_lib(struct shash_desc *desc, void *out) +static int crypto_hmac_sha256_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, + u8 *out) { - struct __sha256_ctx *sctx0 = shash_desc_ctx(desc); - struct __sha256_ctx sctx = *sctx0; - unsigned int partial; - u8 *p = out; - - partial = sctx.bytecount % SHA256_BLOCK_SIZE; - sctx.bytecount -= partial; - memcpy(p, &sctx, sizeof(sctx)); - p += sizeof(sctx); - *p = partial; + hmac_sha256(HMAC_SHA256_KEY(desc->tfm), data, len, out); return 0; } +/* Algorithm definitions */ + static struct shash_alg algs[] = { - { - .base.cra_name = "sha256", - .base.cra_driver_name = "sha256-generic", - .base.cra_priority = 100, - .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize = SHA256_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, - .digestsize = SHA256_DIGEST_SIZE, - .init = crypto_sha256_init, - .update = crypto_sha256_update_generic, - .finup = crypto_sha256_finup_generic, - .digest = crypto_sha256_digest_generic, - .descsize = sizeof(struct crypto_sha256_state), - }, { .base.cra_name = "sha224", - .base.cra_driver_name = "sha224-generic", - .base.cra_priority = 100, - .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, + .base.cra_driver_name = "sha224-lib", + .base.cra_priority = 300, .base.cra_blocksize = SHA224_BLOCK_SIZE, .base.cra_module = THIS_MODULE, .digestsize = SHA224_DIGEST_SIZE, .init = crypto_sha224_init, - .update = crypto_sha256_update_generic, - .finup = crypto_sha256_finup_generic, - .descsize = sizeof(struct crypto_sha256_state), + .update = crypto_sha224_update, + .final = crypto_sha224_final, + .digest = crypto_sha224_digest, + .descsize = sizeof(struct sha224_ctx), }, { .base.cra_name = "sha256", .base.cra_driver_name = "sha256-lib", + .base.cra_priority = 300, .base.cra_blocksize = SHA256_BLOCK_SIZE, .base.cra_module = THIS_MODULE, .digestsize = SHA256_DIGEST_SIZE, .init = crypto_sha256_init, - .update = crypto_sha256_update_lib, - .final = crypto_sha256_final_lib, - .digest = crypto_sha256_digest_lib, + .update = crypto_sha256_update, + .final = crypto_sha256_final, + .digest = crypto_sha256_digest, .descsize = sizeof(struct sha256_ctx), - .statesize = sizeof(struct crypto_sha256_state) + - SHA256_BLOCK_SIZE + 1, - .import = crypto_sha256_import_lib, - .export = crypto_sha256_export_lib, }, { - .base.cra_name = "sha224", - .base.cra_driver_name = "sha224-lib", + .base.cra_name = "hmac(sha224)", + .base.cra_driver_name = "hmac-sha224-lib", + .base.cra_priority = 300, .base.cra_blocksize = SHA224_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct hmac_sha224_key), .base.cra_module = THIS_MODULE, .digestsize = SHA224_DIGEST_SIZE, - .init = crypto_sha224_init, - .update = crypto_sha256_update_lib, - .final = crypto_sha224_final_lib, - .descsize = sizeof(struct sha224_ctx), - .statesize = sizeof(struct crypto_sha256_state) + - SHA256_BLOCK_SIZE + 1, - .import = crypto_sha256_import_lib, - .export = crypto_sha256_export_lib, + .setkey = crypto_hmac_sha224_setkey, + .init = crypto_hmac_sha224_init, + .update = crypto_hmac_sha224_update, + .final = crypto_hmac_sha224_final, + .digest = crypto_hmac_sha224_digest, + .descsize = sizeof(struct hmac_sha224_ctx), }, { - .base.cra_name = "sha256", - .base.cra_driver_name = "sha256-" __stringify(ARCH), + .base.cra_name = "hmac(sha256)", + .base.cra_driver_name = "hmac-sha256-lib", .base.cra_priority = 300, - .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, .base.cra_blocksize = SHA256_BLOCK_SIZE, + .base.cra_ctxsize = sizeof(struct hmac_sha256_key), .base.cra_module = THIS_MODULE, .digestsize = SHA256_DIGEST_SIZE, - .init = crypto_sha256_init, - .update = crypto_sha256_update_arch, - .finup = crypto_sha256_finup_arch, - .digest = crypto_sha256_digest_arch, - .descsize = sizeof(struct crypto_sha256_state), - }, - { - .base.cra_name = "sha224", - .base.cra_driver_name = "sha224-" __stringify(ARCH), - .base.cra_priority = 300, - .base.cra_flags = CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize = SHA224_BLOCK_SIZE, - .base.cra_module = THIS_MODULE, - .digestsize = SHA224_DIGEST_SIZE, - .init = crypto_sha224_init, - .update = crypto_sha256_update_arch, - .finup = crypto_sha256_finup_arch, - .descsize = sizeof(struct crypto_sha256_state), + .setkey = crypto_hmac_sha256_setkey, + .init = crypto_hmac_sha256_init, + .update = crypto_hmac_sha256_update, + .final =
crypto_hmac_sha256_final, + .digest =3D crypto_hmac_sha256_digest, + .descsize =3D sizeof(struct hmac_sha256_ctx), }, }; =20 -static unsigned int num_algs; - static int __init crypto_sha256_mod_init(void) { - /* register the arch flavours only if they differ from generic */ - num_algs =3D ARRAY_SIZE(algs); - BUILD_BUG_ON(ARRAY_SIZE(algs) <=3D 2); - if (!sha256_is_arch_optimized()) - num_algs -=3D 2; return crypto_register_shashes(algs, ARRAY_SIZE(algs)); } module_init(crypto_sha256_mod_init); =20 static void __exit crypto_sha256_mod_exit(void) { - crypto_unregister_shashes(algs, num_algs); + crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); } module_exit(crypto_sha256_mod_exit); =20 MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Crypto API wrapper for the SHA-256 and SHA-224 library= functions"); +MODULE_DESCRIPTION("Crypto API support for SHA-224, SHA-256, HMAC-SHA224, = and HMAC-SHA256"); =20 -MODULE_ALIAS_CRYPTO("sha256"); -MODULE_ALIAS_CRYPTO("sha256-generic"); -MODULE_ALIAS_CRYPTO("sha256-" __stringify(ARCH)); MODULE_ALIAS_CRYPTO("sha224"); -MODULE_ALIAS_CRYPTO("sha224-generic"); -MODULE_ALIAS_CRYPTO("sha224-" __stringify(ARCH)); +MODULE_ALIAS_CRYPTO("sha224-lib"); +MODULE_ALIAS_CRYPTO("sha256"); +MODULE_ALIAS_CRYPTO("sha256-lib"); +MODULE_ALIAS_CRYPTO("hmac(sha224)"); +MODULE_ALIAS_CRYPTO("hmac-sha224-lib"); +MODULE_ALIAS_CRYPTO("hmac(sha256)"); +MODULE_ALIAS_CRYPTO("hmac-sha256-lib"); diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 9d8b11ea4af7f..4e95567f7ed17 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -4268,45 +4268,51 @@ static const struct alg_test_desc alg_test_descs[] = =3D { .alg =3D "authenc(hmac(sha1),rfc3686(ctr(aes)))", .test =3D alg_test_null, .fips_allowed =3D 1, }, { .alg =3D "authenc(hmac(sha224),cbc(des))", + .generic_driver =3D "authenc(hmac-sha224-lib,cbc(des-generic))", .test =3D alg_test_aead, .suite =3D { .aead =3D __VECS(hmac_sha224_des_cbc_tv_temp) } }, { .alg =3D "authenc(hmac(sha224),cbc(des3_ede))", + 
.generic_driver =3D "authenc(hmac-sha224-lib,cbc(des3_ede-generic))", .test =3D alg_test_aead, .suite =3D { .aead =3D __VECS(hmac_sha224_des3_ede_cbc_tv_temp) } }, { .alg =3D "authenc(hmac(sha256),cbc(aes))", + .generic_driver =3D "authenc(hmac-sha256-lib,cbc(aes-generic))", .test =3D alg_test_aead, .fips_allowed =3D 1, .suite =3D { .aead =3D __VECS(hmac_sha256_aes_cbc_tv_temp) } }, { .alg =3D "authenc(hmac(sha256),cbc(des))", + .generic_driver =3D "authenc(hmac-sha256-lib,cbc(des-generic))", .test =3D alg_test_aead, .suite =3D { .aead =3D __VECS(hmac_sha256_des_cbc_tv_temp) } }, { .alg =3D "authenc(hmac(sha256),cbc(des3_ede))", + .generic_driver =3D "authenc(hmac-sha256-lib,cbc(des3_ede-generic))", .test =3D alg_test_aead, .suite =3D { .aead =3D __VECS(hmac_sha256_des3_ede_cbc_tv_temp) } }, { .alg =3D "authenc(hmac(sha256),ctr(aes))", .test =3D alg_test_null, .fips_allowed =3D 1, }, { .alg =3D "authenc(hmac(sha256),cts(cbc(aes)))", + .generic_driver =3D "authenc(hmac-sha256-lib,cts(cbc(aes-generic)))", .test =3D alg_test_aead, .suite =3D { .aead =3D __VECS(krb5_test_aes128_cts_hmac_sha256_128) } }, { @@ -5013,17 +5019,19 @@ static const struct alg_test_desc alg_test_descs[] = =3D { .suite =3D { .sig =3D __VECS(ecrdsa_tv_template) } }, { .alg =3D "essiv(authenc(hmac(sha256),cbc(aes)),sha256)", + .generic_driver =3D "essiv(authenc(hmac-sha256-lib,cbc(aes-generic)),sha= 256-lib)", .test =3D alg_test_aead, .fips_allowed =3D 1, .suite =3D { .aead =3D __VECS(essiv_hmac_sha256_aes_cbc_tv_temp) } }, { .alg =3D "essiv(cbc(aes),sha256)", + .generic_driver =3D "essiv(cbc(aes-generic),sha256-lib)", .test =3D alg_test_skcipher, .fips_allowed =3D 1, .suite =3D { .cipher =3D __VECS(essiv_aes_cbc_tv_template) } @@ -5119,17 +5127,19 @@ static const struct alg_test_desc alg_test_descs[] = =3D { .suite =3D { .hash =3D __VECS(hmac_sha1_tv_template) } }, { .alg =3D "hmac(sha224)", + .generic_driver =3D "hmac-sha224-lib", .test =3D alg_test_hash, .fips_allowed =3D 1, .suite =3D { .hash 
=3D __VECS(hmac_sha224_tv_template) } }, { .alg =3D "hmac(sha256)", + .generic_driver =3D "hmac-sha256-lib", .test =3D alg_test_hash, .fips_allowed =3D 1, .suite =3D { .hash =3D __VECS(hmac_sha256_tv_template) } @@ -5457,17 +5467,19 @@ static const struct alg_test_desc alg_test_descs[] = =3D { .suite =3D { .hash =3D __VECS(sha1_tv_template) } }, { .alg =3D "sha224", + .generic_driver =3D "sha224-lib", .test =3D alg_test_hash, .fips_allowed =3D 1, .suite =3D { .hash =3D __VECS(sha224_tv_template) } }, { .alg =3D "sha256", + .generic_driver =3D "sha256-lib", .test =3D alg_test_hash, .fips_allowed =3D 1, .suite =3D { .hash =3D __VECS(sha256_tv_template) } diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c index e050f5ff5efb6..f312eb075feca 100644 --- a/drivers/crypto/img-hash.c +++ b/drivers/crypto/img-hash.c @@ -708,16 +708,16 @@ static int img_hash_cra_sha1_init(struct crypto_tfm *= tfm) return img_hash_cra_init(tfm, "sha1-generic"); } =20 static int img_hash_cra_sha224_init(struct crypto_tfm *tfm) { - return img_hash_cra_init(tfm, "sha224-generic"); + return img_hash_cra_init(tfm, "sha224-lib"); } =20 static int img_hash_cra_sha256_init(struct crypto_tfm *tfm) { - return img_hash_cra_init(tfm, "sha256-generic"); + return img_hash_cra_init(tfm, "sha256-lib"); } =20 static void img_hash_cra_exit(struct crypto_tfm *tfm) { struct img_hash_ctx *tctx =3D crypto_tfm_ctx(tfm); diff --git a/drivers/crypto/starfive/jh7110-hash.c b/drivers/crypto/starfiv= e/jh7110-hash.c index 4abbff07412ff..6cfe0238f615f 100644 --- a/drivers/crypto/starfive/jh7110-hash.c +++ b/drivers/crypto/starfive/jh7110-hash.c @@ -491,17 +491,17 @@ static int starfive_hash_setkey(struct crypto_ahash *= hash, return starfive_hash_long_setkey(ctx, key, keylen, alg_name); } =20 static int starfive_sha224_init_tfm(struct crypto_ahash *hash) { - return starfive_hash_init_tfm(hash, "sha224-generic", + return starfive_hash_init_tfm(hash, "sha224-lib", STARFIVE_HASH_SHA224, 0); } =20 static int 
starfive_sha256_init_tfm(struct crypto_ahash *hash)
 {
-	return starfive_hash_init_tfm(hash, "sha256-generic",
+	return starfive_hash_init_tfm(hash, "sha256-lib",
 				      STARFIVE_HASH_SHA256, 0);
 }
 
 static int starfive_sha384_init_tfm(struct crypto_ahash *hash)
 {
@@ -521,17 +521,17 @@ static int starfive_sm3_init_tfm(struct crypto_ahash *hash)
 				      STARFIVE_HASH_SM3, 0);
 }
 
 static int starfive_hmac_sha224_init_tfm(struct crypto_ahash *hash)
 {
-	return starfive_hash_init_tfm(hash, "hmac(sha224-generic)",
+	return starfive_hash_init_tfm(hash, "hmac-sha224-lib",
 				      STARFIVE_HASH_SHA224, 1);
 }
 
 static int starfive_hmac_sha256_init_tfm(struct crypto_ahash *hash)
 {
-	return starfive_hash_init_tfm(hash, "hmac(sha256-generic)",
+	return starfive_hash_init_tfm(hash, "hmac-sha256-lib",
 				      STARFIVE_HASH_SHA256, 1);
 }
 
 static int starfive_hmac_sha384_init_tfm(struct crypto_ahash *hash)
 {
-- 
2.50.0

From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld",
    linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org,
    Eric Biggers
Subject: [PATCH 14/18] crypto: sha256 - Use same state format as legacy drivers
Date: Wed, 25 Jun 2025 00:08:15 -0700
Message-ID: <20250625070819.1496119-15-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Make the export and import functions for the sha224, sha256,
hmac(sha224), and hmac(sha256) shash algorithms use the same format as
the padlock-sha and nx-sha256 drivers, as required by Herbert.

Signed-off-by: Eric Biggers
---
 crypto/sha256.c | 95 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 95 insertions(+)

diff --git a/crypto/sha256.c b/crypto/sha256.c
index d81166cbba953..052806559f06c 100644
--- a/crypto/sha256.c
+++ b/crypto/sha256.c
@@ -11,10 +11,47 @@
 #include
 #include
 #include
 #include
 
+/*
+ * Export and import functions.  crypto_shash wants a particular format that
+ * matches that used by some legacy drivers.  It currently is the same as the
+ * library SHA context, except the value in bytecount must be block-aligned and
+ * the remainder must be stored in an extra u8 appended to the struct.
+ */
+
+#define SHA256_SHASH_STATE_SIZE 105
+static_assert(offsetof(struct __sha256_ctx, state) == 0);
+static_assert(offsetof(struct __sha256_ctx, bytecount) == 32);
+static_assert(offsetof(struct __sha256_ctx, buf) == 40);
+static_assert(sizeof(struct __sha256_ctx) + 1 == SHA256_SHASH_STATE_SIZE);
+
+static int __crypto_sha256_export(const struct __sha256_ctx *ctx0, void *out)
+{
+	struct __sha256_ctx ctx = *ctx0;
+	unsigned int partial;
+	u8 *p = out;
+
+	partial = ctx.bytecount % SHA256_BLOCK_SIZE;
+	ctx.bytecount -= partial;
+	memcpy(p, &ctx, sizeof(ctx));
+	p += sizeof(ctx);
+	*p = partial;
+	return 0;
+}
+
+static int __crypto_sha256_import(struct __sha256_ctx *ctx, const void *in)
+{
+	const u8 *p = in;
+
+	memcpy(ctx, p, sizeof(*ctx));
+	p += sizeof(*ctx);
+	ctx->bytecount += *p;
+	return 0;
+}
+
 /* SHA-224 */
 
 const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] = {
 	0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, 0x47,
 	0x61, 0x02, 0xbb, 0x28, 0x82, 0x34, 0xc4, 0x15, 0xa2,
@@ -49,10 +86,20 @@ static int crypto_sha224_digest(struct shash_desc *desc,
 {
 	sha224(data, len, out);
 	return 0;
 }
 
+static int crypto_sha224_export(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export(&SHA224_CTX(desc)->ctx, out);
+}
+
+static int crypto_sha224_import(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha256_import(&SHA224_CTX(desc)->ctx, in);
+}
+
 /* SHA-256 */
 
 const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] = {
 	0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14,
 	0x9a, 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24,
@@ -87,10 +134,20 @@ static int crypto_sha256_digest(struct shash_desc *desc,
 {
 	sha256(data, len, out);
 	return 0;
 }
 
+static int crypto_sha256_export(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export(&SHA256_CTX(desc)->ctx, out);
+}
+
+static int crypto_sha256_import(struct shash_desc *desc, const void *in)
+{
+	return __crypto_sha256_import(&SHA256_CTX(desc)->ctx, in);
+}
+
 /* HMAC-SHA224 */
 
 #define HMAC_SHA224_KEY(tfm) ((struct hmac_sha224_key *)crypto_shash_ctx(tfm))
 #define HMAC_SHA224_CTX(desc) ((struct hmac_sha224_ctx *)shash_desc_ctx(desc))
 
@@ -126,10 +183,23 @@ static int crypto_hmac_sha224_digest(struct shash_desc *desc,
 {
 	hmac_sha224(HMAC_SHA224_KEY(desc->tfm), data, len, out);
 	return 0;
 }
 
+static int crypto_hmac_sha224_export(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export(&HMAC_SHA224_CTX(desc)->ctx.sha_ctx, out);
+}
+
+static int crypto_hmac_sha224_import(struct shash_desc *desc, const void *in)
+{
+	struct hmac_sha224_ctx *ctx = HMAC_SHA224_CTX(desc);
+
+	ctx->ctx.ostate = HMAC_SHA224_KEY(desc->tfm)->key.ostate;
+	return __crypto_sha256_import(&ctx->ctx.sha_ctx, in);
+}
+
 /* HMAC-SHA256 */
 
 #define HMAC_SHA256_KEY(tfm) ((struct hmac_sha256_key *)crypto_shash_ctx(tfm))
 #define HMAC_SHA256_CTX(desc) ((struct hmac_sha256_ctx *)shash_desc_ctx(desc))
 
@@ -165,10 +235,23 @@ static int crypto_hmac_sha256_digest(struct shash_desc *desc,
 {
 	hmac_sha256(HMAC_SHA256_KEY(desc->tfm), data, len, out);
 	return 0;
 }
 
+static int crypto_hmac_sha256_export(struct shash_desc *desc, void *out)
+{
+	return __crypto_sha256_export(&HMAC_SHA256_CTX(desc)->ctx.sha_ctx, out);
+}
+
+static int crypto_hmac_sha256_import(struct shash_desc *desc, const void *in)
+{
+	struct hmac_sha256_ctx *ctx = HMAC_SHA256_CTX(desc);
+
+	ctx->ctx.ostate = HMAC_SHA256_KEY(desc->tfm)->key.ostate;
+	return __crypto_sha256_import(&ctx->ctx.sha_ctx, in);
+}
+
 /* Algorithm definitions */
 
 static struct shash_alg algs[] = {
 	{
 		.base.cra_name = "sha224",
@@ -179,11 +262,14 @@ static struct shash_alg algs[] = {
 		.digestsize = SHA224_DIGEST_SIZE,
 		.init = crypto_sha224_init,
 		.update = crypto_sha224_update,
 		.final = crypto_sha224_final,
 		.digest = crypto_sha224_digest,
+		.export = crypto_sha224_export,
+		.import = crypto_sha224_import,
 		.descsize = sizeof(struct sha224_ctx),
+		.statesize = SHA256_SHASH_STATE_SIZE,
 	},
 	{
 		.base.cra_name = "sha256",
 		.base.cra_driver_name = "sha256-lib",
 		.base.cra_priority = 300,
@@ -192,11 +278,14 @@ static struct shash_alg algs[] = {
 		.digestsize = SHA256_DIGEST_SIZE,
 		.init = crypto_sha256_init,
 		.update = crypto_sha256_update,
 		.final = crypto_sha256_final,
 		.digest = crypto_sha256_digest,
+		.export = crypto_sha256_export,
+		.import = crypto_sha256_import,
 		.descsize = sizeof(struct sha256_ctx),
+		.statesize = SHA256_SHASH_STATE_SIZE,
 	},
 	{
 		.base.cra_name = "hmac(sha224)",
 		.base.cra_driver_name = "hmac-sha224-lib",
 		.base.cra_priority = 300,
@@ -207,11 +296,14 @@ static struct shash_alg algs[] = {
 		.setkey = crypto_hmac_sha224_setkey,
 		.init = crypto_hmac_sha224_init,
 		.update = crypto_hmac_sha224_update,
 		.final = crypto_hmac_sha224_final,
 		.digest = crypto_hmac_sha224_digest,
+		.export = crypto_hmac_sha224_export,
+		.import = crypto_hmac_sha224_import,
 		.descsize = sizeof(struct hmac_sha224_ctx),
+		.statesize = SHA256_SHASH_STATE_SIZE,
 	},
 	{
 		.base.cra_name = "hmac(sha256)",
 		.base.cra_driver_name = "hmac-sha256-lib",
 		.base.cra_priority = 300,
@@ -222,11 +314,14 @@ static struct shash_alg algs[] = {
 		.setkey = crypto_hmac_sha256_setkey,
 		.init = crypto_hmac_sha256_init,
 		.update = crypto_hmac_sha256_update,
 		.final = crypto_hmac_sha256_final,
 		.digest = crypto_hmac_sha256_digest,
+		.export = crypto_hmac_sha256_export,
+		.import = crypto_hmac_sha256_import,
 		.descsize = sizeof(struct hmac_sha256_ctx),
+		.statesize = SHA256_SHASH_STATE_SIZE,
 	},
 };
 
 static int __init crypto_sha256_mod_init(void)
 {
-- 
2.50.0

From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld",
    linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org,
    Eric Biggers
Subject: [PATCH 15/18] lib/crypto: sha256: Remove sha256_is_arch_optimized()
Date: Wed, 25 Jun 2025 00:08:16 -0700
Message-ID: <20250625070819.1496119-16-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Remove sha256_is_arch_optimized(), since it is no longer used.

Signed-off-by: Eric Biggers
---
 arch/mips/cavium-octeon/crypto/octeon-sha256.c | 6 ------
 include/crypto/internal/sha2.h                 | 8 --------
 lib/crypto/arm/sha256.c                        | 7 -------
 lib/crypto/arm64/sha256.c                      | 7 -------
 lib/crypto/powerpc/sha256.c                    | 6 ------
 lib/crypto/riscv/sha256.c                      | 6 ------
 lib/crypto/s390/sha256.c                       | 6 ------
 lib/crypto/sparc/sha256.c                      | 6 ------
 lib/crypto/x86/sha256.c                        | 6 ------
 9 files changed, 58 deletions(-)

diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha256.c b/arch/mips/cavium-octeon/crypto/octeon-sha256.c
index f8664818d04ec..c7c67bdc2bd06 100644
--- a/arch/mips/cavium-octeon/crypto/octeon-sha256.c
+++ b/arch/mips/cavium-octeon/crypto/octeon-sha256.c
@@ -59,14 +59,8 @@ void sha256_blocks_arch(struct sha256_block_state *state,
 	state64[3] = read_octeon_64bit_hash_dword(3);
 	octeon_crypto_disable(&cop2_state, flags);
 }
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
-bool sha256_is_arch_optimized(void)
-{
-	return octeon_has_crypto();
-}
-EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
-
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("SHA-256 Secure Hash Algorithm (OCTEON)");
 MODULE_AUTHOR("Aaro Koskinen");
diff --git a/include/crypto/internal/sha2.h b/include/crypto/internal/sha2.h
index 51028484ccdc7..f724e6ad03a9f 100644
--- a/include/crypto/internal/sha2.h
+++ b/include/crypto/internal/sha2.h
@@ -7,18 +7,10 @@
 #include
 #include
 #include
 #include
 
-#if IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256)
-bool sha256_is_arch_optimized(void);
-#else
-static inline bool sha256_is_arch_optimized(void)
-{
-	return false;
-}
-#endif
 void sha256_blocks_generic(struct sha256_block_state *state,
 			   const u8 *data, size_t nblocks);
 void sha256_blocks_arch(struct sha256_block_state *state,
 			const u8 *data, size_t nblocks);
 
diff --git a/lib/crypto/arm/sha256.c b/lib/crypto/arm/sha256.c
index 7d90823586952..27181be0aa92e 100644
--- a/lib/crypto/arm/sha256.c
+++ b/lib/crypto/arm/sha256.c
@@ -35,17 +35,10 @@ void sha256_blocks_arch(struct sha256_block_state *state,
 		sha256_block_data_order(state, data, nblocks);
 	}
 }
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
-bool sha256_is_arch_optimized(void)
-{
-	/* We always can use at least the ARM scalar implementation. */
-	return true;
-}
-EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
-
 static int __init sha256_arm_mod_init(void)
 {
 	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && (elf_hwcap & HWCAP_NEON)) {
 		static_branch_enable(&have_neon);
 		if (elf_hwcap2 & HWCAP2_SHA2)
diff --git a/lib/crypto/arm64/sha256.c b/lib/crypto/arm64/sha256.c
index 609ffb8151987..a5a4982767089 100644
--- a/lib/crypto/arm64/sha256.c
+++ b/lib/crypto/arm64/sha256.c
@@ -45,17 +45,10 @@ void sha256_blocks_arch(struct sha256_block_state *state,
 		sha256_block_data_order(state, data, nblocks);
 	}
 }
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
-bool sha256_is_arch_optimized(void)
-{
-	/* We always can use at least the ARM64 scalar implementation. */
-	return true;
-}
-EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
-
 static int __init sha256_arm64_mod_init(void)
 {
 	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
 	    cpu_have_named_feature(ASIMD)) {
 		static_branch_enable(&have_neon);
diff --git a/lib/crypto/powerpc/sha256.c b/lib/crypto/powerpc/sha256.c
index c3f844ae0aceb..437e587b05754 100644
--- a/lib/crypto/powerpc/sha256.c
+++ b/lib/crypto/powerpc/sha256.c
@@ -58,13 +58,7 @@ void sha256_blocks_arch(struct sha256_block_state *state,
 		nblocks -= unit;
 	} while (nblocks);
 }
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
-bool sha256_is_arch_optimized(void)
-{
-	return true;
-}
-EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
-
 MODULE_LICENSE("GPL");
 MODULE_DESCRIPTION("SHA-256 Secure Hash Algorithm, SPE optimized");
diff --git a/lib/crypto/riscv/sha256.c b/lib/crypto/riscv/sha256.c
index a2079aa3ae925..01004cb9c6e9e 100644
--- a/lib/crypto/riscv/sha256.c
+++ b/lib/crypto/riscv/sha256.c
@@ -32,16 +32,10 @@ void sha256_blocks_arch(struct sha256_block_state *state,
 		sha256_blocks_generic(state, data, nblocks);
 	}
 }
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
-bool sha256_is_arch_optimized(void)
-{
-	return static_key_enabled(&have_extensions);
-}
-EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
-
 static int __init riscv64_sha256_mod_init(void)
 {
 	/* Both zvknha and zvknhb provide the SHA-256 instructions. */
 	if ((riscv_isa_extension_available(NULL, ZVKNHA) ||
 	     riscv_isa_extension_available(NULL, ZVKNHB)) &&
diff --git a/lib/crypto/s390/sha256.c b/lib/crypto/s390/sha256.c
index fb565718f7539..6ebfd35a5d44c 100644
--- a/lib/crypto/s390/sha256.c
+++ b/lib/crypto/s390/sha256.c
@@ -21,16 +21,10 @@ void sha256_blocks_arch(struct sha256_block_state *state,
 	else
 		sha256_blocks_generic(state, data, nblocks);
 }
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
-bool sha256_is_arch_optimized(void)
-{
-	return static_key_enabled(&have_cpacf_sha256);
-}
-EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
-
 static int __init sha256_s390_mod_init(void)
 {
 	if (cpu_have_feature(S390_CPU_FEATURE_MSA) &&
 	    cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_256))
 		static_branch_enable(&have_cpacf_sha256);
diff --git a/lib/crypto/sparc/sha256.c b/lib/crypto/sparc/sha256.c
index 060664b88a6d3..f41c109c1c18d 100644
--- a/lib/crypto/sparc/sha256.c
+++ b/lib/crypto/sparc/sha256.c
@@ -30,16 +30,10 @@ void sha256_blocks_arch(struct sha256_block_state *state,
 	else
 		sha256_blocks_generic(state, data, nblocks);
 }
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
-bool sha256_is_arch_optimized(void)
-{
-	return static_key_enabled(&have_sha256_opcodes);
-}
-EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
-
 static int __init sha256_sparc64_mod_init(void)
 {
 	unsigned long cfr;
 
 	if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO))
diff --git a/lib/crypto/x86/sha256.c b/lib/crypto/x86/sha256.c
index cbb45defbefab..9ee38d2b3d572 100644
--- a/lib/crypto/x86/sha256.c
+++ b/lib/crypto/x86/sha256.c
@@ -35,16 +35,10 @@ void sha256_blocks_arch(struct sha256_block_state *state,
 		sha256_blocks_generic(state, data, nblocks);
 	}
 }
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
-bool sha256_is_arch_optimized(void)
-{
-	return static_key_enabled(&have_sha256_x86);
-}
-EXPORT_SYMBOL_GPL(sha256_is_arch_optimized);
-
 static int __init sha256_x86_mod_init(void)
 {
 	if (boot_cpu_has(X86_FEATURE_SHA_NI)) {
 		static_call_update(sha256_blocks_x86, sha256_ni_transform);
 	} else if (cpu_has_xfeatures(XFEATURE_MASK_SSE |
-- 
2.50.0

From nobody Sun Sep 7 11:31:36 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel, "Jason A. Donenfeld",
    linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
    linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
    linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org,
    Eric Biggers
Subject: [PATCH 16/18] lib/crypto: sha256: Consolidate into single module
Date: Wed, 25 Jun 2025 00:08:17 -0700
Message-ID: <20250625070819.1496119-17-ebiggers@kernel.org>
In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org>
References: <20250625070819.1496119-1-ebiggers@kernel.org>

Consolidate the CPU-based SHA-256 code into a single module, following
what I did with SHA-512:

- Each arch now provides a header file lib/crypto/$(SRCARCH)/sha256.h,
  replacing lib/crypto/$(SRCARCH)/sha256.c.  The header defines
  sha256_blocks() and optionally sha256_mod_init_arch().  It is included
  by lib/crypto/sha256.c, and thus the code gets built into the single
  libsha256 module, with proper inlining and dead code elimination.

- sha256_blocks_generic() is moved from lib/crypto/sha256-generic.c into
  lib/crypto/sha256.c.  It's now a static function marked with
  __maybe_unused, so the compiler automatically eliminates it in any
  cases where it's not used.

- Whether arch-optimized SHA-256 is buildable is now controlled
  centrally by lib/crypto/Kconfig instead of by
  lib/crypto/$(SRCARCH)/Kconfig.  The conditions for enabling it remain
  the same as before, and it remains enabled by default.
- Any additional arch-specific translation units for the optimized SHA-256 code (such as assembly files) are now compiled by lib/crypto/Makefile instead of lib/crypto/$(SRCARCH)/Makefile. Signed-off-by: Eric Biggers --- arch/mips/cavium-octeon/Kconfig | 6 - arch/mips/cavium-octeon/crypto/Makefile | 1 - include/crypto/internal/sha2.h | 52 ------ lib/crypto/Kconfig | 26 ++- lib/crypto/Makefile | 39 ++++- lib/crypto/arm/Kconfig | 6 - lib/crypto/arm/Makefile | 8 +- lib/crypto/arm/{sha256.c =3D> sha256.h} | 25 +-- lib/crypto/arm64/Kconfig | 5 - lib/crypto/arm64/Makefile | 9 +- lib/crypto/arm64/{sha256.c =3D> sha256.h} | 26 +-- .../crypto/mips/sha256.h | 14 +- lib/crypto/powerpc/Kconfig | 6 - lib/crypto/powerpc/Makefile | 3 - lib/crypto/powerpc/{sha256.c =3D> sha256.h} | 13 +- lib/crypto/riscv/Kconfig | 7 - lib/crypto/riscv/Makefile | 3 - lib/crypto/riscv/{sha256.c =3D> sha256.h} | 24 +-- lib/crypto/s390/Kconfig | 6 - lib/crypto/s390/Makefile | 3 - lib/crypto/s390/{sha256.c =3D> sha256.h} | 23 +-- lib/crypto/sha256-generic.c | 150 ------------------ lib/crypto/sha256.c | 146 +++++++++++++++-- lib/crypto/sparc/Kconfig | 8 - lib/crypto/sparc/Makefile | 4 - lib/crypto/sparc/{sha256.c =3D> sha256.h} | 29 +--- lib/crypto/x86/Kconfig | 7 - lib/crypto/x86/Makefile | 3 - lib/crypto/x86/{sha256.c =3D> sha256.h} | 25 +-- 29 files changed, 222 insertions(+), 455 deletions(-) delete mode 100644 include/crypto/internal/sha2.h rename lib/crypto/arm/{sha256.c =3D> sha256.h} (68%) rename lib/crypto/arm64/{sha256.c =3D> sha256.h} (71%) rename arch/mips/cavium-octeon/crypto/octeon-sha256.c =3D> lib/crypto/mips= /sha256.h (80%) rename lib/crypto/powerpc/{sha256.c =3D> sha256.h} (80%) rename lib/crypto/riscv/{sha256.c =3D> sha256.h} (63%) rename lib/crypto/s390/{sha256.c =3D> sha256.h} (50%) delete mode 100644 lib/crypto/sha256-generic.c delete mode 100644 lib/crypto/sparc/Kconfig delete mode 100644 lib/crypto/sparc/Makefile rename lib/crypto/sparc/{sha256.c =3D> sha256.h} (62%) rename 
lib/crypto/x86/{sha256.c =3D> sha256.h} (74%) diff --git a/arch/mips/cavium-octeon/Kconfig b/arch/mips/cavium-octeon/Kcon= fig index 11f4aa6e80e9b..450e979ef5d93 100644 --- a/arch/mips/cavium-octeon/Kconfig +++ b/arch/mips/cavium-octeon/Kconfig @@ -21,16 +21,10 @@ config CAVIUM_OCTEON_CVMSEG_SIZE local memory; the larger CVMSEG is, the smaller the cache is. This selects the size of CVMSEG LM, which is in cache blocks. The legally range is from zero to 54 cache blocks (i.e. CVMSEG LM is between zero and 6192 bytes). =20 -config CRYPTO_SHA256_OCTEON - tristate - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC - endif # CPU_CAVIUM_OCTEON =20 if CAVIUM_OCTEON_SOC =20 config CAVIUM_OCTEON_LOCK_L2 diff --git a/arch/mips/cavium-octeon/crypto/Makefile b/arch/mips/cavium-oct= eon/crypto/Makefile index 168b19ef7ce89..db428e4b30bce 100644 --- a/arch/mips/cavium-octeon/crypto/Makefile +++ b/arch/mips/cavium-octeon/crypto/Makefile @@ -5,6 +5,5 @@ =20 obj-y +=3D octeon-crypto.o =20 obj-$(CONFIG_CRYPTO_MD5_OCTEON) +=3D octeon-md5.o obj-$(CONFIG_CRYPTO_SHA1_OCTEON) +=3D octeon-sha1.o -obj-$(CONFIG_CRYPTO_SHA256_OCTEON) +=3D octeon-sha256.o diff --git a/include/crypto/internal/sha2.h b/include/crypto/internal/sha2.h deleted file mode 100644 index f724e6ad03a9f..0000000000000 --- a/include/crypto/internal/sha2.h +++ /dev/null @@ -1,52 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ - -#ifndef _CRYPTO_INTERNAL_SHA2_H -#define _CRYPTO_INTERNAL_SHA2_H - -#include -#include -#include -#include -#include - -void sha256_blocks_generic(struct sha256_block_state *state, - const u8 *data, size_t nblocks); -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks); - -static inline void sha256_choose_blocks( - u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks, - bool force_generic, bool force_simd) -{ - if (!IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256) || force_generic) - 
sha256_blocks_generic((struct sha256_block_state *)state, data, nblocks); - else - sha256_blocks_arch((struct sha256_block_state *)state, data, nblocks); -} - -static __always_inline void sha256_finup( - struct crypto_sha256_state *sctx, u8 buf[SHA256_BLOCK_SIZE], - size_t len, u8 out[SHA256_DIGEST_SIZE], size_t digest_size, - bool force_generic, bool force_simd) -{ - const size_t bit_offset =3D SHA256_BLOCK_SIZE - 8; - __be64 *bits =3D (__be64 *)&buf[bit_offset]; - int i; - - buf[len++] =3D 0x80; - if (len > bit_offset) { - memset(&buf[len], 0, SHA256_BLOCK_SIZE - len); - sha256_choose_blocks(sctx->state, buf, 1, force_generic, - force_simd); - len =3D 0; - } - - memset(&buf[len], 0, bit_offset - len); - *bits =3D cpu_to_be64(sctx->count << 3); - sha256_choose_blocks(sctx->state, buf, 1, force_generic, force_simd); - - for (i =3D 0; i < digest_size; i +=3D 4) - put_unaligned_be32(sctx->state[i / 4], out + i); -} - -#endif /* _CRYPTO_INTERNAL_SHA2_H */ diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index efc91300ab865..2580bc90c15e2 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -144,24 +144,21 @@ config CRYPTO_LIB_SHA256 help Enable the SHA-256 library interface. This interface may be fulfilled by either the generic implementation or an arch-specific one, if one is available and enabled. =20 -config CRYPTO_ARCH_HAVE_LIB_SHA256 +config CRYPTO_LIB_SHA256_ARCH bool - help - Declares whether the architecture provides an arch-specific - accelerated implementation of the SHA-256 library interface. - -config CRYPTO_LIB_SHA256_GENERIC - tristate - default CRYPTO_LIB_SHA256 if !CRYPTO_ARCH_HAVE_LIB_SHA256 - help - This symbol can be selected by arch implementations of the SHA-256 - library interface that require the generic code as a fallback, e.g., - for SIMD implementations. If no arch specific implementation is - enabled, this implementation serves the users of CRYPTO_LIB_SHA256. 
+ depends on CRYPTO_LIB_SHA256 && !UML + default y if ARM && !CPU_V7M + default y if ARM64 + default y if MIPS && CPU_CAVIUM_OCTEON + default y if PPC && SPE + default y if RISCV && 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + default y if S390 + default y if SPARC64 + default y if X86_64 =20 config CRYPTO_LIB_SHA512 tristate help The SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512 library functions. @@ -199,13 +196,10 @@ if RISCV source "lib/crypto/riscv/Kconfig" endif if S390 source "lib/crypto/s390/Kconfig" endif -if SPARC -source "lib/crypto/sparc/Kconfig" -endif if X86 source "lib/crypto/x86/Kconfig" endif endif =20 diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index a20b039ea0a8d..d92a7ba2ef0a0 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -66,15 +66,43 @@ libpoly1305-generic-$(CONFIG_ARCH_SUPPORTS_INT128) :=3D= poly1305-donna64.o libpoly1305-generic-y +=3D poly1305-generic.o =20 obj-$(CONFIG_CRYPTO_LIB_SHA1) +=3D libsha1.o libsha1-y :=3D sha1.o =20 -obj-$(CONFIG_CRYPTO_LIB_SHA256) +=3D libsha256.o -libsha256-y :=3D sha256.o +##########################################################################= ###### =20 -obj-$(CONFIG_CRYPTO_LIB_SHA256_GENERIC) +=3D libsha256-generic.o -libsha256-generic-y :=3D sha256-generic.o +obj-$(CONFIG_CRYPTO_LIB_SHA256) +=3D libsha256.o +libsha256-y :=3D sha256.o +ifeq ($(CONFIG_CRYPTO_LIB_SHA256_ARCH),y) +CFLAGS_sha256.o +=3D -I$(src)/$(SRCARCH) + +ifeq ($(CONFIG_ARM),y) +libsha256-y +=3D arm/sha256-ce.o arm/sha256-core.o +$(obj)/arm/sha256-core.S: $(src)/arm/sha256-armv4.pl + $(call cmd,perlasm) +clean-files +=3D arm/sha256-core.S +AFLAGS_arm/sha256-core.o +=3D $(aflags-thumb2-y) +endif + +ifeq ($(CONFIG_ARM64),y) +libsha256-y +=3D arm64/sha256-core.o +$(obj)/arm64/sha256-core.S: $(src)/arm64/sha2-armv8.pl + $(call cmd,perlasm_with_args) +clean-files +=3D arm64/sha256-core.S +libsha256-$(CONFIG_KERNEL_MODE_NEON) +=3D arm64/sha256-ce.o +endif + +libsha256-$(CONFIG_PPC) +=3D 
powerpc/sha256-spe-asm.o +libsha256-$(CONFIG_RISCV) +=3D riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.o +libsha256-$(CONFIG_SPARC) +=3D sparc/sha256_asm.o +libsha256-$(CONFIG_X86) +=3D x86/sha256-ssse3-asm.o \ + x86/sha256-avx-asm.o \ + x86/sha256-avx2-asm.o \ + x86/sha256-ni-asm.o +endif # CONFIG_CRYPTO_LIB_SHA256_ARCH + +##########################################################################= ###### =20 obj-$(CONFIG_CRYPTO_LIB_SHA512) +=3D libsha512.o libsha512-y :=3D sha512.o ifeq ($(CONFIG_CRYPTO_LIB_SHA512_ARCH),y) CFLAGS_sha512.o +=3D -I$(src)/$(SRCARCH) @@ -100,10 +128,12 @@ libsha512-$(CONFIG_SPARC) +=3D sparc/sha512_asm.o libsha512-$(CONFIG_X86) +=3D x86/sha512-ssse3-asm.o \ x86/sha512-avx-asm.o \ x86/sha512-avx2-asm.o endif # CONFIG_CRYPTO_LIB_SHA512_ARCH =20 +##########################################################################= ###### + obj-$(CONFIG_MPILIB) +=3D mpi/ =20 obj-$(CONFIG_CRYPTO_SELFTESTS_FULL) +=3D simd.o =20 obj-$(CONFIG_CRYPTO_LIB_SM3) +=3D libsm3.o @@ -113,7 +143,6 @@ obj-$(CONFIG_ARM) +=3D arm/ obj-$(CONFIG_ARM64) +=3D arm64/ obj-$(CONFIG_MIPS) +=3D mips/ obj-$(CONFIG_PPC) +=3D powerpc/ obj-$(CONFIG_RISCV) +=3D riscv/ obj-$(CONFIG_S390) +=3D s390/ -obj-$(CONFIG_SPARC) +=3D sparc/ obj-$(CONFIG_X86) +=3D x86/ diff --git a/lib/crypto/arm/Kconfig b/lib/crypto/arm/Kconfig index 9f3ff30f40328..e8444fd0aae30 100644 --- a/lib/crypto/arm/Kconfig +++ b/lib/crypto/arm/Kconfig @@ -20,11 +20,5 @@ config CRYPTO_CHACHA20_NEON =20 config CRYPTO_POLY1305_ARM tristate default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 - -config CRYPTO_SHA256_ARM - tristate - depends on !CPU_V7M - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 diff --git a/lib/crypto/arm/Makefile b/lib/crypto/arm/Makefile index 431f77c3ff6fd..4c042a4c77ed6 100644 --- a/lib/crypto/arm/Makefile +++ b/lib/crypto/arm/Makefile @@ -8,25 +8,19 @@ chacha-neon-y :=3D chacha-scalar-core.o chacha-glue.o chacha-neon-$(CONFIG_KERNEL_MODE_NEON) +=3D 
chacha-neon-core.o =20 obj-$(CONFIG_CRYPTO_POLY1305_ARM) +=3D poly1305-arm.o poly1305-arm-y :=3D poly1305-core.o poly1305-glue.o =20 -obj-$(CONFIG_CRYPTO_SHA256_ARM) +=3D sha256-arm.o -sha256-arm-y :=3D sha256.o sha256-core.o -sha256-arm-$(CONFIG_KERNEL_MODE_NEON) +=3D sha256-ce.o - quiet_cmd_perl =3D PERL $@ cmd_perl =3D $(PERL) $(<) > $(@) =20 $(obj)/%-core.S: $(src)/%-armv4.pl $(call cmd,perl) =20 -clean-files +=3D poly1305-core.S sha256-core.S +clean-files +=3D poly1305-core.S =20 aflags-thumb2-$(CONFIG_THUMB2_KERNEL) :=3D -U__thumb2__ -D__thumb2__=3D1 =20 # massage the perlasm code a bit so we only get the NEON routine if we nee= d it poly1305-aflags-$(CONFIG_CPU_V7) :=3D -U__LINUX_ARM_ARCH__ -D__LINUX_ARM_A= RCH__=3D5 poly1305-aflags-$(CONFIG_KERNEL_MODE_NEON) :=3D -U__LINUX_ARM_ARCH__ -D__L= INUX_ARM_ARCH__=3D7 AFLAGS_poly1305-core.o +=3D $(poly1305-aflags-y) $(aflags-thumb2-y) - -AFLAGS_sha256-core.o +=3D $(aflags-thumb2-y) diff --git a/lib/crypto/arm/sha256.c b/lib/crypto/arm/sha256.h similarity index 68% rename from lib/crypto/arm/sha256.c rename to lib/crypto/arm/sha256.h index 27181be0aa92e..555b3fe3ce8b1 100644 --- a/lib/crypto/arm/sha256.c +++ b/lib/crypto/arm/sha256.h @@ -1,16 +1,13 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 optimized for ARM * * Copyright 2025 Google LLC */ #include -#include #include -#include -#include =20 asmlinkage void sha256_block_data_order(struct sha256_block_state *state, const u8 *data, size_t nblocks); asmlinkage void sha256_block_data_order_neon(struct sha256_block_state *st= ate, const u8 *data, size_t nblocks); @@ -18,12 +15,12 @@ asmlinkage void sha256_ce_transform(struct sha256_block= _state *state, const u8 *data, size_t nblocks); =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce); =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) 
+static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && static_branch_likely(&have_neon) && crypto_simd_usable()) { kernel_neon_begin(); if (static_branch_likely(&have_ce)) @@ -33,25 +30,17 @@ void sha256_blocks_arch(struct sha256_block_state *stat= e, kernel_neon_end(); } else { sha256_block_data_order(state, data, nblocks); } } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -static int __init sha256_arm_mod_init(void) +#ifdef CONFIG_KERNEL_MODE_NEON +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && (elf_hwcap & HWCAP_NEON)) { static_branch_enable(&have_neon); if (elf_hwcap2 & HWCAP2_SHA2) static_branch_enable(&have_ce); } - return 0; } -subsys_initcall(sha256_arm_mod_init); - -static void __exit sha256_arm_mod_exit(void) -{ -} -module_exit(sha256_arm_mod_exit); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 optimized for ARM"); +#endif /* CONFIG_KERNEL_MODE_NEON */ diff --git a/lib/crypto/arm64/Kconfig b/lib/crypto/arm64/Kconfig index 49e57bfdb5b52..0b903ef524d85 100644 --- a/lib/crypto/arm64/Kconfig +++ b/lib/crypto/arm64/Kconfig @@ -10,10 +10,5 @@ config CRYPTO_CHACHA20_NEON config CRYPTO_POLY1305_NEON tristate depends on KERNEL_MODE_NEON default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 - -config CRYPTO_SHA256_ARM64 - tristate - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 diff --git a/lib/crypto/arm64/Makefile b/lib/crypto/arm64/Makefile index 946c099037117..6207088397a73 100644 --- a/lib/crypto/arm64/Makefile +++ b/lib/crypto/arm64/Makefile @@ -6,19 +6,12 @@ chacha-neon-y :=3D chacha-neon-core.o chacha-neon-glue.o obj-$(CONFIG_CRYPTO_POLY1305_NEON) +=3D poly1305-neon.o poly1305-neon-y :=3D poly1305-core.o poly1305-glue.o AFLAGS_poly1305-core.o +=3D -Dpoly1305_init=3Dpoly1305_block_init_arch AFLAGS_poly1305-core.o +=3D 
-Dpoly1305_emit=3Dpoly1305_emit_arch =20 -obj-$(CONFIG_CRYPTO_SHA256_ARM64) +=3D sha256-arm64.o -sha256-arm64-y :=3D sha256.o sha256-core.o -sha256-arm64-$(CONFIG_KERNEL_MODE_NEON) +=3D sha256-ce.o - quiet_cmd_perlasm =3D PERLASM $@ cmd_perlasm =3D $(PERL) $(<) void $(@) =20 $(obj)/%-core.S: $(src)/%-armv8.pl $(call cmd,perlasm) =20 -$(obj)/sha256-core.S: $(src)/sha2-armv8.pl - $(call cmd,perlasm) - -clean-files +=3D poly1305-core.S sha256-core.S +clean-files +=3D poly1305-core.S diff --git a/lib/crypto/arm64/sha256.c b/lib/crypto/arm64/sha256.h similarity index 71% rename from lib/crypto/arm64/sha256.c rename to lib/crypto/arm64/sha256.h index a5a4982767089..c70eaa9b0d65e 100644 --- a/lib/crypto/arm64/sha256.c +++ b/lib/crypto/arm64/sha256.h @@ -1,16 +1,14 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 optimized for ARM64 * * Copyright 2025 Google LLC */ #include -#include #include -#include -#include +#include =20 asmlinkage void sha256_block_data_order(struct sha256_block_state *state, const u8 *data, size_t nblocks); asmlinkage void sha256_block_neon(struct sha256_block_state *state, const u8 *data, size_t nblocks); @@ -18,12 +16,12 @@ asmlinkage size_t __sha256_ce_transform(struct sha256_b= lock_state *state, const u8 *data, size_t nblocks); =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce); =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && static_branch_likely(&have_neon) && crypto_simd_usable()) { if (static_branch_likely(&have_ce)) { do { @@ -43,26 +41,18 @@ void sha256_blocks_arch(struct sha256_block_state *stat= e, } } else { sha256_block_data_order(state, data, nblocks); } } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -static int __init 
sha256_arm64_mod_init(void) +#ifdef CONFIG_KERNEL_MODE_NEON +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && cpu_have_named_feature(ASIMD)) { static_branch_enable(&have_neon); if (cpu_have_named_feature(SHA2)) static_branch_enable(&have_ce); } - return 0; } -subsys_initcall(sha256_arm64_mod_init); - -static void __exit sha256_arm64_mod_exit(void) -{ -} -module_exit(sha256_arm64_mod_exit); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 optimized for ARM64"); +#endif /* CONFIG_KERNEL_MODE_NEON */ diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha256.c b/lib/crypto/mi= ps/sha256.h similarity index 80% rename from arch/mips/cavium-octeon/crypto/octeon-sha256.c rename to lib/crypto/mips/sha256.h index c7c67bdc2bd06..ccccfd131634b 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-sha256.c +++ b/lib/crypto/mips/sha256.h @@ -1,6 +1,6 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 Secure Hash Algorithm. * * Adapted for OCTEON by Aaro Koskinen . * @@ -12,20 +12,17 @@ * SHA224 Support Copyright 2007 Intel Corporation */ =20 #include #include -#include -#include -#include =20 /* * We pass everything as 64-bit. OCTEON can handle misaligned data. 
*/ =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { struct octeon_cop2_state cop2_state; u64 *state64 =3D (u64 *)state; unsigned long flags; =20 @@ -57,10 +54,5 @@ void sha256_blocks_arch(struct sha256_block_state *state, state64[1] =3D read_octeon_64bit_hash_dword(1); state64[2] =3D read_octeon_64bit_hash_dword(2); state64[3] =3D read_octeon_64bit_hash_dword(3); octeon_crypto_disable(&cop2_state, flags); } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 Secure Hash Algorithm (OCTEON)"); -MODULE_AUTHOR("Aaro Koskinen "); diff --git a/lib/crypto/powerpc/Kconfig b/lib/crypto/powerpc/Kconfig index 3f9e1bbd9905b..2eaeb7665a6a0 100644 --- a/lib/crypto/powerpc/Kconfig +++ b/lib/crypto/powerpc/Kconfig @@ -12,11 +12,5 @@ config CRYPTO_POLY1305_P10 depends on PPC64 && CPU_LITTLE_ENDIAN && VSX depends on BROKEN # Needs to be fixed to work in softirq context default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 select CRYPTO_LIB_POLY1305_GENERIC - -config CRYPTO_SHA256_PPC_SPE - tristate - depends on SPE - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 diff --git a/lib/crypto/powerpc/Makefile b/lib/crypto/powerpc/Makefile index 27f231f8e334a..5709ae14258a0 100644 --- a/lib/crypto/powerpc/Makefile +++ b/lib/crypto/powerpc/Makefile @@ -3,8 +3,5 @@ obj-$(CONFIG_CRYPTO_CHACHA20_P10) +=3D chacha-p10-crypto.o chacha-p10-crypto-y :=3D chacha-p10-glue.o chacha-p10le-8x.o =20 obj-$(CONFIG_CRYPTO_POLY1305_P10) +=3D poly1305-p10-crypto.o poly1305-p10-crypto-y :=3D poly1305-p10-glue.o poly1305-p10le_64.o - -obj-$(CONFIG_CRYPTO_SHA256_PPC_SPE) +=3D sha256-ppc-spe.o -sha256-ppc-spe-y :=3D sha256.o sha256-spe-asm.o diff --git a/lib/crypto/powerpc/sha256.c b/lib/crypto/powerpc/sha256.h similarity index 80% rename from lib/crypto/powerpc/sha256.c rename to 
lib/crypto/powerpc/sha256.h index 437e587b05754..d923698de5745 100644 --- a/lib/crypto/powerpc/sha256.c +++ b/lib/crypto/powerpc/sha256.h @@ -1,19 +1,16 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 Secure Hash Algorithm, SPE optimized * * Based on generic implementation. The assembler module takes care * about the SPE registers so it can run from interrupt context. * * Copyright (c) 2015 Markus Stockhausen */ =20 #include -#include -#include -#include #include =20 /* * MAX_BYTES defines the number of bytes that are allowed to be processed * between preempt_disable() and preempt_enable(). SHA256 takes ~2,000 @@ -40,12 +37,12 @@ static void spe_end(void) disable_kernel_spe(); /* reenable preemption */ preempt_enable(); } =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { do { /* cut input data into smaller blocks */ u32 unit =3D min_t(size_t, nblocks, MAX_BYTES / SHA256_BLOCK_SIZE); @@ -56,9 +53,5 @@ void sha256_blocks_arch(struct sha256_block_state *state, =20 data +=3D unit * SHA256_BLOCK_SIZE; nblocks -=3D unit; } while (nblocks); } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 Secure Hash Algorithm, SPE optimized"); diff --git a/lib/crypto/riscv/Kconfig b/lib/crypto/riscv/Kconfig index c100571feb7e8..bc7a43f33eb3a 100644 --- a/lib/crypto/riscv/Kconfig +++ b/lib/crypto/riscv/Kconfig @@ -4,12 +4,5 @@ config CRYPTO_CHACHA_RISCV64 tristate depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO default CRYPTO_LIB_CHACHA select CRYPTO_ARCH_HAVE_LIB_CHACHA select CRYPTO_LIB_CHACHA_GENERIC - -config CRYPTO_SHA256_RISCV64 - tristate - depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC diff --git 
a/lib/crypto/riscv/Makefile b/lib/crypto/riscv/Makefile index b7cb877a2c07e..e27b78f317fc8 100644 --- a/lib/crypto/riscv/Makefile +++ b/lib/crypto/riscv/Makefile @@ -1,7 +1,4 @@ # SPDX-License-Identifier: GPL-2.0-only =20 obj-$(CONFIG_CRYPTO_CHACHA_RISCV64) +=3D chacha-riscv64.o chacha-riscv64-y :=3D chacha-riscv64-glue.o chacha-riscv64-zvkb.o - -obj-$(CONFIG_CRYPTO_SHA256_RISCV64) +=3D sha256-riscv64.o -sha256-riscv64-y :=3D sha256.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o diff --git a/lib/crypto/riscv/sha256.c b/lib/crypto/riscv/sha256.h similarity index 63% rename from lib/crypto/riscv/sha256.c rename to lib/crypto/riscv/sha256.h index 01004cb9c6e9e..c0f79c18f1199 100644 --- a/lib/crypto/riscv/sha256.c +++ b/lib/crypto/riscv/sha256.h @@ -1,6 +1,6 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 (RISC-V accelerated) * * Copyright (C) 2022 VRULL GmbH * Author: Heiko Stuebner @@ -8,49 +8,35 @@ * Copyright (C) 2023 SiFive, Inc. * Author: Jerry Shih */ =20 #include -#include #include -#include -#include =20 asmlinkage void sha256_transform_zvknha_or_zvknhb_zvkb(struct sha256_block_state *state, const u8 *data, size_t nblocks); =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_extensions); =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (static_branch_likely(&have_extensions) && crypto_simd_usable()) { kernel_vector_begin(); sha256_transform_zvknha_or_zvknhb_zvkb(state, data, nblocks); kernel_vector_end(); } else { sha256_blocks_generic(state, data, nblocks); } } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -static int __init riscv64_sha256_mod_init(void) +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { /* Both zvknha and zvknhb provide the SHA-256 instructions. 
*/ if ((riscv_isa_extension_available(NULL, ZVKNHA) || riscv_isa_extension_available(NULL, ZVKNHB)) && riscv_isa_extension_available(NULL, ZVKB) && riscv_vector_vlen() >=3D 128) static_branch_enable(&have_extensions); - return 0; } -subsys_initcall(riscv64_sha256_mod_init); - -static void __exit riscv64_sha256_mod_exit(void) -{ -} -module_exit(riscv64_sha256_mod_exit); - -MODULE_DESCRIPTION("SHA-256 (RISC-V accelerated)"); -MODULE_AUTHOR("Heiko Stuebner "); -MODULE_LICENSE("GPL"); diff --git a/lib/crypto/s390/Kconfig b/lib/crypto/s390/Kconfig index e3f855ef43934..069b355fe51aa 100644 --- a/lib/crypto/s390/Kconfig +++ b/lib/crypto/s390/Kconfig @@ -3,11 +3,5 @@ config CRYPTO_CHACHA_S390 tristate default CRYPTO_LIB_CHACHA select CRYPTO_LIB_CHACHA_GENERIC select CRYPTO_ARCH_HAVE_LIB_CHACHA - -config CRYPTO_SHA256_S390 - tristate - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC diff --git a/lib/crypto/s390/Makefile b/lib/crypto/s390/Makefile index 5df30f1e79307..06c2cf77178ef 100644 --- a/lib/crypto/s390/Makefile +++ b/lib/crypto/s390/Makefile @@ -1,7 +1,4 @@ # SPDX-License-Identifier: GPL-2.0-only =20 obj-$(CONFIG_CRYPTO_CHACHA_S390) +=3D chacha_s390.o chacha_s390-y :=3D chacha-glue.o chacha-s390.o - -obj-$(CONFIG_CRYPTO_SHA256_S390) +=3D sha256-s390.o -sha256-s390-y :=3D sha256.o diff --git a/lib/crypto/s390/sha256.c b/lib/crypto/s390/sha256.h similarity index 50% rename from lib/crypto/s390/sha256.c rename to lib/crypto/s390/sha256.h index 6ebfd35a5d44c..70a81cbc06b2c 100644 --- a/lib/crypto/s390/sha256.c +++ b/lib/crypto/s390/sha256.h @@ -1,41 +1,28 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 optimized using the CP Assist for Cryptographic Functions (CPAC= F) * * Copyright 2025 Google LLC */ #include -#include #include -#include -#include =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_cpacf_sha256); =20 -void sha256_blocks_arch(struct 
sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (static_branch_likely(&have_cpacf_sha256)) cpacf_kimd(CPACF_KIMD_SHA_256, state, data, nblocks * SHA256_BLOCK_SIZE); else sha256_blocks_generic(state, data, nblocks); } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -static int __init sha256_s390_mod_init(void) +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { if (cpu_have_feature(S390_CPU_FEATURE_MSA) && cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_256)) static_branch_enable(&have_cpacf_sha256); - return 0; } -subsys_initcall(sha256_s390_mod_init); - -static void __exit sha256_s390_mod_exit(void) -{ -} -module_exit(sha256_s390_mod_exit); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 using the CP Assist for Cryptographic Function= s (CPACF)"); diff --git a/lib/crypto/sha256-generic.c b/lib/crypto/sha256-generic.c deleted file mode 100644 index 99f904033c261..0000000000000 --- a/lib/crypto/sha256-generic.c +++ /dev/null @@ -1,150 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * SHA-256, as specified in - * http://csrc.nist.gov/groups/STM/cavp/documents/shs/sha256-384-512.pdf - * - * SHA-256 code by Jean-Luc Cooke . - * - * Copyright (c) Jean-Luc Cooke - * Copyright (c) Andrew McDonald - * Copyright (c) 2002 James Morris - * Copyright (c) 2014 Red Hat Inc. 
- */ - -#include -#include -#include -#include -#include -#include - -static const u32 SHA256_K[] =3D { - 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, - 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5, - 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, - 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, - 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, - 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da, - 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, - 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967, - 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, - 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, - 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, - 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070, - 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, - 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3, - 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, - 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2, -}; - -static inline u32 Ch(u32 x, u32 y, u32 z) -{ - return z ^ (x & (y ^ z)); -} - -static inline u32 Maj(u32 x, u32 y, u32 z) -{ - return (x & y) | (z & (x | y)); -} - -#define e0(x) (ror32(x, 2) ^ ror32(x, 13) ^ ror32(x, 22)) -#define e1(x) (ror32(x, 6) ^ ror32(x, 11) ^ ror32(x, 25)) -#define s0(x) (ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3)) -#define s1(x) (ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10)) - -static inline void LOAD_OP(int I, u32 *W, const u8 *input) -{ - W[I] =3D get_unaligned_be32((__u32 *)input + I); -} - -static inline void BLEND_OP(int I, u32 *W) -{ - W[I] =3D s1(W[I-2]) + W[I-7] + s0(W[I-15]) + W[I-16]; -} - -#define SHA256_ROUND(i, a, b, c, d, e, f, g, h) do { \ - u32 t1, t2; \ - t1 =3D h + e1(e) + Ch(e, f, g) + SHA256_K[i] + W[i]; \ - t2 =3D e0(a) + Maj(a, b, c); \ - d +=3D t1; \ - h =3D t1 + t2; \ -} while (0) - -static void sha256_block_generic(struct sha256_block_state *state, - const u8 *input, u32 W[64]) -{ - u32 a, b, c, d, e, f, g, h; - int i; - - /* load the input */ - for (i =3D 0; i < 16; i +=3D 8) { - LOAD_OP(i + 0, W, input); - LOAD_OP(i + 1, W, 
input); - LOAD_OP(i + 2, W, input); - LOAD_OP(i + 3, W, input); - LOAD_OP(i + 4, W, input); - LOAD_OP(i + 5, W, input); - LOAD_OP(i + 6, W, input); - LOAD_OP(i + 7, W, input); - } - - /* now blend */ - for (i =3D 16; i < 64; i +=3D 8) { - BLEND_OP(i + 0, W); - BLEND_OP(i + 1, W); - BLEND_OP(i + 2, W); - BLEND_OP(i + 3, W); - BLEND_OP(i + 4, W); - BLEND_OP(i + 5, W); - BLEND_OP(i + 6, W); - BLEND_OP(i + 7, W); - } - - /* load the state into our registers */ - a =3D state->h[0]; - b =3D state->h[1]; - c =3D state->h[2]; - d =3D state->h[3]; - e =3D state->h[4]; - f =3D state->h[5]; - g =3D state->h[6]; - h =3D state->h[7]; - - /* now iterate */ - for (i =3D 0; i < 64; i +=3D 8) { - SHA256_ROUND(i + 0, a, b, c, d, e, f, g, h); - SHA256_ROUND(i + 1, h, a, b, c, d, e, f, g); - SHA256_ROUND(i + 2, g, h, a, b, c, d, e, f); - SHA256_ROUND(i + 3, f, g, h, a, b, c, d, e); - SHA256_ROUND(i + 4, e, f, g, h, a, b, c, d); - SHA256_ROUND(i + 5, d, e, f, g, h, a, b, c); - SHA256_ROUND(i + 6, c, d, e, f, g, h, a, b); - SHA256_ROUND(i + 7, b, c, d, e, f, g, h, a); - } - - state->h[0] +=3D a; - state->h[1] +=3D b; - state->h[2] +=3D c; - state->h[3] +=3D d; - state->h[4] +=3D e; - state->h[5] +=3D f; - state->h[6] +=3D g; - state->h[7] +=3D h; -} - -void sha256_blocks_generic(struct sha256_block_state *state, - const u8 *data, size_t nblocks) -{ - u32 W[64]; - - do { - sha256_block_generic(state, data, W); - data +=3D SHA256_BLOCK_SIZE; - } while (--nblocks); - - memzero_explicit(W, sizeof(W)); -} -EXPORT_SYMBOL_GPL(sha256_blocks_generic); - -MODULE_DESCRIPTION("SHA-256 Algorithm (generic implementation)"); -MODULE_LICENSE("GPL"); diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c index 165c894a47aa0..0de49bf8e8b8b 100644 --- a/lib/crypto/sha256.c +++ b/lib/crypto/sha256.c @@ -4,18 +4,20 @@ * * Copyright (c) Jean-Luc Cooke * Copyright (c) Andrew McDonald * Copyright (c) 2002 James Morris * Copyright (c) 2014 Red Hat Inc. 
+ * Copyright 2025 Google LLC */ =20 #include #include -#include +#include #include #include #include +#include #include =20 static const struct sha256_block_state sha224_iv =3D { .h =3D { SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3, @@ -28,30 +30,132 @@ static const struct sha256_block_state sha256_iv =3D { SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3, SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7, }, }; =20 -/* - * If __DISABLE_EXPORTS is defined, then this file is being compiled for a - * pre-boot environment. In that case, ignore the kconfig options, pull t= he - * generic code into the same translation unit, and use that only. - */ -#ifdef __DISABLE_EXPORTS -#include "sha256-generic.c" -#endif +static const u32 SHA256_K[] =3D { + 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, + 0x923f82a4, 0xab1c5ed5, 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, + 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786, + 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da, + 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, + 0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, + 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 0xa2bfe8a1, 0xa81a664b, + 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070, + 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, + 0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, + 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2, +}; + +#define Ch(x, y, z) ((z) ^ ((x) & ((y) ^ (z)))) +#define Maj(x, y, z) (((x) & (y)) | ((z) & ((x) | (y)))) +#define e0(x) (ror32(x, 2) ^ ror32(x, 13) ^ ror32(x, 22)) +#define e1(x) (ror32(x, 6) ^ ror32(x, 11) ^ ror32(x, 25)) +#define s0(x) (ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3)) +#define s1(x) (ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10)) + +static inline void LOAD_OP(int I, u32 *W, const u8 *input) +{ + W[I] =3D get_unaligned_be32((__u32 *)input + I); +} + +static 
inline void BLEND_OP(int I, u32 *W) +{ + W[I] = s1(W[I - 2]) + W[I - 7] + s0(W[I - 15]) + W[I - 16]; +} -static inline bool sha256_purgatory(void) +#define SHA256_ROUND(i, a, b, c, d, e, f, g, h) \ + do { \ + u32 t1, t2; \ + t1 = h + e1(e) + Ch(e, f, g) + SHA256_K[i] + W[i]; \ + t2 = e0(a) + Maj(a, b, c); \ + d += t1; \ + h = t1 + t2; \ + } while (0) + +static void sha256_block_generic(struct sha256_block_state *state, + const u8 *input, u32 W[64]) { - return __is_defined(__DISABLE_EXPORTS); + u32 a, b, c, d, e, f, g, h; + int i; + + /* load the input */ + for (i = 0; i < 16; i += 8) { + LOAD_OP(i + 0, W, input); + LOAD_OP(i + 1, W, input); + LOAD_OP(i + 2, W, input); + LOAD_OP(i + 3, W, input); + LOAD_OP(i + 4, W, input); + LOAD_OP(i + 5, W, input); + LOAD_OP(i + 6, W, input); + LOAD_OP(i + 7, W, input); + } + + /* now blend */ + for (i = 16; i < 64; i += 8) { + BLEND_OP(i + 0, W); + BLEND_OP(i + 1, W); + BLEND_OP(i + 2, W); + BLEND_OP(i + 3, W); + BLEND_OP(i + 4, W); + BLEND_OP(i + 5, W); + BLEND_OP(i + 6, W); + BLEND_OP(i + 7, W); + } + + /* load the state into our registers */ + a = state->h[0]; + b = state->h[1]; + c = state->h[2]; + d = state->h[3]; + e = state->h[4]; + f = state->h[5]; + g = state->h[6]; + h = state->h[7]; + + /* now iterate */ + for (i = 0; i < 64; i += 8) { + SHA256_ROUND(i + 0, a, b, c, d, e, f, g, h); + SHA256_ROUND(i + 1, h, a, b, c, d, e, f, g); + SHA256_ROUND(i + 2, g, h, a, b, c, d, e, f); + SHA256_ROUND(i + 3, f, g, h, a, b, c, d, e); + SHA256_ROUND(i + 4, e, f, g, h, a, b, c, d); + SHA256_ROUND(i + 5, d, e, f, g, h, a, b, c); + SHA256_ROUND(i + 6, c, d, e, f, g, h, a, b); + SHA256_ROUND(i + 7, b, c, d, e, f, g, h, a); + } + + state->h[0] += a; + state->h[1] += b; + state->h[2] += c; + state->h[3] += d; + state->h[4] += e; + state->h[5] += f; + state->h[6] += g; + state->h[7] += h; } -static inline void sha256_blocks(struct sha256_block_state *state, - const u8 *data,
size_t nblocks) +static void __maybe_unused +sha256_blocks_generic(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { - sha256_choose_blocks(state->h, data, nblocks, sha256_purgatory(), false); + u32 W[64]; + + do { + sha256_block_generic(state, data, W); + data += SHA256_BLOCK_SIZE; + } while (--nblocks); + + memzero_explicit(W, sizeof(W)); } +#if defined(CONFIG_CRYPTO_LIB_SHA256_ARCH) && !defined(__DISABLE_EXPORTS) +#include "sha256.h" /* $(SRCARCH)/sha256.h */ +#else +#define sha256_blocks sha256_blocks_generic +#endif + static void __sha256_init(struct __sha256_ctx *ctx, const struct sha256_block_state *iv, u64 initial_bytecount) { ctx->state = *iv; @@ -267,7 +371,21 @@ void hmac_sha256_usingrawkey(const u8 *raw_key, size_t raw_key_len, memzero_explicit(&key, sizeof(key)); } EXPORT_SYMBOL_GPL(hmac_sha256_usingrawkey); +#ifdef sha256_mod_init_arch +static int __init sha256_mod_init(void) +{ + sha256_mod_init_arch(); + return 0; +} +subsys_initcall(sha256_mod_init); + +static void __exit sha256_mod_exit(void) +{ +} +module_exit(sha256_mod_exit); +#endif + MODULE_DESCRIPTION("SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256 library functions"); MODULE_LICENSE("GPL"); diff --git a/lib/crypto/sparc/Kconfig b/lib/crypto/sparc/Kconfig deleted file mode 100644 index e5c3e4d3dba62..0000000000000 --- a/lib/crypto/sparc/Kconfig +++ /dev/null @@ -1,8 +0,0 @@ -# SPDX-License-Identifier: GPL-2.0-only - -config CRYPTO_SHA256_SPARC64 - tristate - depends on SPARC64 - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC diff --git a/lib/crypto/sparc/Makefile b/lib/crypto/sparc/Makefile deleted file mode 100644 index 75ee244ad6f79..0000000000000 --- a/lib/crypto/sparc/Makefile +++ /dev/null @@ -1,4 +0,0 @@ -# SPDX-License-Identifier: GPL-2.0-only - -obj-$(CONFIG_CRYPTO_SHA256_SPARC64) += sha256-sparc64.o -sha256-sparc64-y := sha256.o sha256_asm.o diff --git a/lib/crypto/sparc/sha256.c
b/lib/crypto/sparc/sha256.h similarity index 62% rename from lib/crypto/sparc/sha256.c rename to lib/crypto/sparc/sha256.h index f41c109c1c18d..1d10108eb1954 100644 --- a/lib/crypto/sparc/sha256.c +++ b/lib/crypto/sparc/sha256.h @@ -1,58 +1,43 @@ -// SPDX-License-Identifier: GPL-2.0-only +/* SPDX-License-Identifier: GPL-2.0-only */ /* * SHA-256 accelerated using the sparc64 sha256 opcodes * * Copyright (c) Jean-Luc Cooke * Copyright (c) Andrew McDonald * Copyright (c) 2002 James Morris * SHA224 Support Copyright 2007 Intel Corporation */ -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - #include #include #include -#include -#include -#include static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_opcodes); asmlinkage void sha256_sparc64_transform(struct sha256_block_state *state, const u8 *data, size_t nblocks); -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (static_branch_likely(&have_sha256_opcodes)) sha256_sparc64_transform(state, data, nblocks); else sha256_blocks_generic(state, data, nblocks); } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); -static int __init sha256_sparc64_mod_init(void) +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { unsigned long cfr; if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO)) - return 0; + return; __asm__ __volatile__("rd %%asr26, %0" : "=r" (cfr)); if (!(cfr & CFR_SHA256)) - return 0; + return; static_branch_enable(&have_sha256_opcodes); pr_info("Using sparc64 sha256 opcode optimized SHA-256/SHA-224 implementation\n"); - return 0; } -subsys_initcall(sha256_sparc64_mod_init); - -static void __exit sha256_sparc64_mod_exit(void) -{ -} -module_exit(sha256_sparc64_mod_exit); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 accelerated using the sparc64 sha256 opcodes"); diff --git a/lib/crypto/x86/Kconfig
b/lib/crypto/x86/Kconfig index e344579db3d85..546fe2afe0b51 100644 --- a/lib/crypto/x86/Kconfig +++ b/lib/crypto/x86/Kconfig @@ -22,12 +22,5 @@ config CRYPTO_CHACHA20_X86_64 config CRYPTO_POLY1305_X86_64 tristate depends on 64BIT default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 - -config CRYPTO_SHA256_X86_64 - tristate - depends on 64BIT - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC diff --git a/lib/crypto/x86/Makefile b/lib/crypto/x86/Makefile index abceca3d31c01..c2ff8c5f1046e 100644 --- a/lib/crypto/x86/Makefile +++ b/lib/crypto/x86/Makefile @@ -8,13 +8,10 @@ chacha-x86_64-y := chacha-avx2-x86_64.o chacha-ssse3-x86_64.o chacha-avx512vl-x8 obj-$(CONFIG_CRYPTO_POLY1305_X86_64) += poly1305-x86_64.o poly1305-x86_64-y := poly1305-x86_64-cryptogams.o poly1305_glue.o targets += poly1305-x86_64-cryptogams.S -obj-$(CONFIG_CRYPTO_SHA256_X86_64) += sha256-x86_64.o -sha256-x86_64-y := sha256.o sha256-ssse3-asm.o sha256-avx-asm.o sha256-avx2-asm.o sha256-ni-asm.o - quiet_cmd_perlasm = PERLASM $@ cmd_perlasm = $(PERL) $< > $@ $(obj)/%.S: $(src)/%.pl FORCE $(call if_changed,perlasm) diff --git a/lib/crypto/x86/sha256.c b/lib/crypto/x86/sha256.h similarity index 74% rename from lib/crypto/x86/sha256.c rename to lib/crypto/x86/sha256.h index 9ee38d2b3d572..2bad50f3ef279 100644 --- a/lib/crypto/x86/sha256.c +++ b/lib/crypto/x86/sha256.h @@ -1,16 +1,13 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 optimized for x86_64 * * Copyright 2025 Google LLC */ #include -#include #include -#include -#include #include asmlinkage void sha256_transform_ssse3(struct sha256_block_state *state, const u8 *data, size_t nblocks); asmlinkage void sha256_transform_avx(struct sha256_block_state *state, @@ -22,24 +19,24 @@ asmlinkage void sha256_ni_transform(struct sha256_block_state *state, static __ro_after_init
DEFINE_STATIC_KEY_FALSE(have_sha256_x86); DEFINE_STATIC_CALL(sha256_blocks_x86, sha256_transform_ssse3); -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable()) { kernel_fpu_begin(); static_call(sha256_blocks_x86)(state, data, nblocks); kernel_fpu_end(); } else { sha256_blocks_generic(state, data, nblocks); } } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); -static int __init sha256_x86_mod_init(void) +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { if (boot_cpu_has(X86_FEATURE_SHA_NI)) { static_call_update(sha256_blocks_x86, sha256_ni_transform); } else if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, NULL) && @@ -50,19 +47,9 @@ static int __init sha256_x86_mod_init(void) sha256_transform_rorx); else static_call_update(sha256_blocks_x86, sha256_transform_avx); } else if (!boot_cpu_has(X86_FEATURE_SSSE3)) { - return 0; + return; } static_branch_enable(&have_sha256_x86); - return 0; } -subsys_initcall(sha256_x86_mod_init); - -static void __exit sha256_x86_mod_exit(void) -{ -} -module_exit(sha256_x86_mod_exit); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 optimized for x86_64"); -- 2.50.0 From nobody Sun Sep 7 11:31:36 2025 From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel , "Jason A .
Donenfeld" , linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Eric Biggers Subject: [PATCH 17/18] lib/crypto: sha256: Sync sha256_update() with sha512_update() Date: Wed, 25 Jun 2025 00:08:18 -0700 Message-ID: <20250625070819.1496119-18-ebiggers@kernel.org> In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org> References: <20250625070819.1496119-1-ebiggers@kernel.org> The BLOCK_HASH_UPDATE_BLOCKS macro is difficult to read. For now, let's just write the update explicitly in the straightforward way, mirroring sha512_update(). It's possible that we'll bring back a macro for this later, but it needs to be properly justified and hopefully a bit more readable. Signed-off-by: Eric Biggers --- lib/crypto/sha256.c | 28 +++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c index 0de49bf8e8b8b..c93bf4699160c 100644 --- a/lib/crypto/sha256.c +++ b/lib/crypto/sha256.c @@ -8,11 +8,10 @@ * Copyright (c) 2014 Red Hat Inc.
* Copyright 2025 Google LLC */ #include -#include #include #include #include #include #include @@ -177,12 +176,35 @@ EXPORT_SYMBOL_GPL(sha256_init); void __sha256_update(struct __sha256_ctx *ctx, const u8 *data, size_t len) { size_t partial = ctx->bytecount % SHA256_BLOCK_SIZE; ctx->bytecount += len; - BLOCK_HASH_UPDATE_BLOCKS(sha256_blocks, &ctx->state, data, len, - SHA256_BLOCK_SIZE, ctx->buf, partial); + + if (partial + len >= SHA256_BLOCK_SIZE) { + size_t nblocks; + + if (partial) { + size_t l = SHA256_BLOCK_SIZE - partial; + + memcpy(&ctx->buf[partial], data, l); + data += l; + len -= l; + + sha256_blocks(&ctx->state, ctx->buf, 1); + } + + nblocks = len / SHA256_BLOCK_SIZE; + len %= SHA256_BLOCK_SIZE; + + if (nblocks) { + sha256_blocks(&ctx->state, data, nblocks); + data += nblocks * SHA256_BLOCK_SIZE; + } + partial = 0; + } + if (len) + memcpy(&ctx->buf[partial], data, len); } EXPORT_SYMBOL(__sha256_update); static void __sha256_final(struct __sha256_ctx *ctx, u8 *out, size_t digest_size) -- 2.50.0 From nobody Sun Sep 7 11:31:36 2025 From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, Ard Biesheuvel , "Jason A .
Donenfeld" , linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Eric Biggers Subject: [PATCH 18/18] lib/crypto: sha256: Document the SHA-224 and SHA-256 API Date: Wed, 25 Jun 2025 00:08:19 -0700 Message-ID: <20250625070819.1496119-19-ebiggers@kernel.org> In-Reply-To: <20250625070819.1496119-1-ebiggers@kernel.org> References: <20250625070819.1496119-1-ebiggers@kernel.org> Add kerneldoc comments, consistent with the kerneldoc comments of the SHA-384 and SHA-512 API. Signed-off-by: Eric Biggers --- include/crypto/sha2.h | 76 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 76 insertions(+) diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h index 2e3fc2cf4aa0d..e0a08f6addd00 100644 --- a/include/crypto/sha2.h +++ b/include/crypto/sha2.h @@ -153,17 +153,55 @@ void __hmac_sha256_init(struct __hmac_sha256_ctx *ctx, */ struct sha224_ctx { struct __sha256_ctx ctx; }; +/** + * sha224_init() - Initialize a SHA-224 context for a new message + * @ctx: the context to initialize + * + * If you don't need incremental computation, consider sha224() instead. + * + * Context: Any context. + */ void sha224_init(struct sha224_ctx *ctx); + +/** + * sha224_update() - Update a SHA-224 context with message data + * @ctx: the context to update; must have been initialized + * @data: the message data + * @len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context.
+ */ static inline void sha224_update(struct sha224_ctx *ctx, const u8 *data, size_t len) { __sha256_update(&ctx->ctx, data, len); } + +/** + * sha224_final() - Finish computing a SHA-224 message digest + * @ctx: the context to finalize; must have been initialized + * @out: (output) the resulting SHA-224 message digest + * + * After finishing, this zeroizes @ctx. So the caller does not need to do it. + * + * Context: Any context. + */ void sha224_final(struct sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]); + +/** + * sha224() - Compute SHA-224 message digest in one shot + * @data: the message data + * @len: the data length in bytes + * @out: (output) the resulting SHA-224 message digest + * + * Context: Any context. + */ void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE]); /** * struct hmac_sha224_key - Prepared key for HMAC-SHA224 * @key: private @@ -273,17 +311,55 @@ void hmac_sha224_usingrawkey(const u8 *raw_key, size_t raw_key_len, */ struct sha256_ctx { struct __sha256_ctx ctx; }; +/** + * sha256_init() - Initialize a SHA-256 context for a new message + * @ctx: the context to initialize + * + * If you don't need incremental computation, consider sha256() instead. + * + * Context: Any context. + */ void sha256_init(struct sha256_ctx *ctx); + +/** + * sha256_update() - Update a SHA-256 context with message data + * @ctx: the context to update; must have been initialized + * @data: the message data + * @len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. + */ static inline void sha256_update(struct sha256_ctx *ctx, const u8 *data, size_t len) { __sha256_update(&ctx->ctx, data, len); } + +/** + * sha256_final() - Finish computing a SHA-256 message digest + * @ctx: the context to finalize; must have been initialized + * @out: (output) the resulting SHA-256 message digest + * + * After finishing, this zeroizes @ctx. So the caller does not need to do it. + * + * Context: Any context.
+ */ void sha256_final(struct sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]); + +/** + * sha256() - Compute SHA-256 message digest in one shot + * @data: the message data + * @len: the data length in bytes + * @out: (output) the resulting SHA-256 message digest + * + * Context: Any context. + */ void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]); /** * struct hmac_sha256_key - Prepared key for HMAC-SHA256 * @key: private -- 2.50.0
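[Editor's note: the incremental init/update/final API documented in patch 18, and the explicit partial-block buffering that patch 17 adds to __sha256_update(), both rest on one invariant: the digest must not depend on how a message is split across update() calls. The kernel C API cannot run standalone, so the sketch below uses Python's hashlib as a runnable analogue of the same API shape; the function name chunked_sha256 is illustrative, not part of the kernel API.]

```python
import hashlib

def chunked_sha256(data: bytes, chunk_size: int) -> str:
    """Hash `data` incrementally, `chunk_size` bytes per update() call."""
    ctx = hashlib.sha256()                   # analogous to sha256_init()
    for i in range(0, len(data), chunk_size):
        ctx.update(data[i:i + chunk_size])   # analogous to sha256_update()
    return ctx.hexdigest()                   # analogous to sha256_final()

# FIPS 180-4 test vector: SHA-256("abc")
assert (hashlib.sha256(b"abc").hexdigest() ==
        "ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad")

# Splits that land mid-way through a 64-byte block are exactly the
# cases the `partial` buffering in __sha256_update() has to cover.
msg = b"abc" * 1000
one_shot = hashlib.sha256(msg).hexdigest()
for chunk_size in (1, 63, 64, 65, 1000):
    assert chunked_sha256(msg, chunk_size) == one_shot
```

The chunk sizes 63, 64, and 65 probe the three interesting states of the buffering logic: a partial block left in the buffer, an exact block boundary, and a carry-over into the next block.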