From nobody Sun Sep 7 11:34:13 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel,
	"Jason A. Donenfeld", Eric Biggers
Subject: [PATCH v2 01/14] libceph: Rename hmac_sha256() to ceph_hmac_sha256()
Date: Mon, 30 Jun 2025 09:06:32 -0700
Message-ID: <20250630160645.3198-2-ebiggers@kernel.org>
In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org>
References: <20250630160645.3198-1-ebiggers@kernel.org>

Rename hmac_sha256() to ceph_hmac_sha256(), to avoid a naming conflict
with the upcoming hmac_sha256() library function.  This code will be
able to use the HMAC-SHA256 library, but that is left for a later
commit.
Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 net/ceph/messenger_v2.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/net/ceph/messenger_v2.c b/net/ceph/messenger_v2.c
index bd608ffa06279..5483b4eed94e1 100644
--- a/net/ceph/messenger_v2.c
+++ b/net/ceph/messenger_v2.c
@@ -791,12 +791,12 @@ static int setup_crypto(struct ceph_connection *con,
 	       con_secret + CEPH_GCM_KEY_LEN + CEPH_GCM_IV_LEN, CEPH_GCM_IV_LEN);
 	return 0;  /* auth_x, secure mode */
 }
 
-static int hmac_sha256(struct ceph_connection *con, const struct kvec *kvecs,
-		       int kvec_cnt, u8 *hmac)
+static int ceph_hmac_sha256(struct ceph_connection *con,
+			    const struct kvec *kvecs, int kvec_cnt, u8 *hmac)
 {
 	SHASH_DESC_ON_STACK(desc, con->v2.hmac_tfm);  /* tfm arg is ignored */
 	int ret;
 	int i;
 
@@ -1460,12 +1460,12 @@ static int prepare_auth_signature(struct ceph_connection *con)
 	buf = alloc_conn_buf(con, head_onwire_len(SHA256_DIGEST_SIZE,
 						  con_secure(con)));
 	if (!buf)
 		return -ENOMEM;
 
-	ret = hmac_sha256(con, con->v2.in_sign_kvecs, con->v2.in_sign_kvec_cnt,
-			  CTRL_BODY(buf));
+	ret = ceph_hmac_sha256(con, con->v2.in_sign_kvecs,
+			       con->v2.in_sign_kvec_cnt, CTRL_BODY(buf));
 	if (ret)
 		return ret;
 
 	return prepare_control(con, FRAME_TAG_AUTH_SIGNATURE, buf,
 			       SHA256_DIGEST_SIZE);
@@ -2458,12 +2458,12 @@ static int process_auth_signature(struct ceph_connection *con,
 	if (con->state != CEPH_CON_S_V2_AUTH_SIGNATURE) {
 		con->error_msg = "protocol error, unexpected auth_signature";
 		return -EINVAL;
 	}
 
-	ret = hmac_sha256(con, con->v2.out_sign_kvecs,
-			  con->v2.out_sign_kvec_cnt, hmac);
+	ret = ceph_hmac_sha256(con, con->v2.out_sign_kvecs,
+			       con->v2.out_sign_kvec_cnt, hmac);
 	if (ret)
 		return ret;
 
 	ceph_decode_need(&p, end, SHA256_DIGEST_SIZE, bad);
 	if (crypto_memneq(p, hmac, SHA256_DIGEST_SIZE)) {
-- 
2.50.0

From nobody Sun Sep 7 11:34:13 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc:
 linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel,
	"Jason A. Donenfeld", Eric Biggers
Subject: [PATCH v2 02/14] cxl/test: Simplify fw_buf_checksum_show()
Date: Mon, 30 Jun 2025 09:06:33 -0700
Message-ID: <20250630160645.3198-3-ebiggers@kernel.org>
In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org>
References: <20250630160645.3198-1-ebiggers@kernel.org>

First, just use sha256() instead of a sequence of sha256_init(),
sha256_update(), and sha256_final().  The result is the same.

Second, use *phN instead of open-coding the conversion of bytes to hex.

Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 tools/testing/cxl/test/mem.c | 21 ++-------------------
 1 file changed, 2 insertions(+), 19 deletions(-)

diff --git a/tools/testing/cxl/test/mem.c b/tools/testing/cxl/test/mem.c
index 0f1d91f57ba34..d533481672b78 100644
--- a/tools/testing/cxl/test/mem.c
+++ b/tools/testing/cxl/test/mem.c
@@ -1826,31 +1826,14 @@ static DEVICE_ATTR_RW(security_lock);
 static ssize_t fw_buf_checksum_show(struct device *dev,
 				    struct device_attribute *attr, char *buf)
 {
 	struct cxl_mockmem_data *mdata = dev_get_drvdata(dev);
 	u8 hash[SHA256_DIGEST_SIZE];
-	unsigned char *hstr, *hptr;
-	struct sha256_state sctx;
-	ssize_t written = 0;
-	int i;
-
-	sha256_init(&sctx);
-	sha256_update(&sctx, mdata->fw, mdata->fw_size);
-	sha256_final(&sctx, hash);
-
-	hstr = kzalloc((SHA256_DIGEST_SIZE * 2) + 1, GFP_KERNEL);
-	if (!hstr)
-		return -ENOMEM;
-
-	hptr = hstr;
-	for (i = 0; i < SHA256_DIGEST_SIZE; i++)
-		hptr += sprintf(hptr, "%02x", hash[i]);
 
-	written = sysfs_emit(buf, "%s\n", hstr);
+	sha256(mdata->fw, mdata->fw_size, hash);
 
-	kfree(hstr);
-	return written;
+	return sysfs_emit(buf, "%*phN\n", SHA256_DIGEST_SIZE, hash);
 }
 
 static DEVICE_ATTR_RO(fw_buf_checksum);
 
 static ssize_t sanitize_timeout_show(struct device *dev,
-- 
2.50.0
From nobody Sun Sep 7 11:34:13 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel,
	"Jason A. Donenfeld", Eric Biggers
Subject: [PATCH v2 03/14] lib/crypto: sha256: Reorder some code
Date: Mon, 30 Jun 2025 09:06:34 -0700
Message-ID: <20250630160645.3198-4-ebiggers@kernel.org>
In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org>
References: <20250630160645.3198-1-ebiggers@kernel.org>

First, move the declarations of sha224_init/update/final to just above
the corresponding SHA-256 code, matching the order that I used for
SHA-384 and SHA-512.  In sha2.h, the end result is that SHA-224,
SHA-256, SHA-384, and SHA-512 are all in logical order.

Second, move sha224_block_init() and sha256_block_init() to just below
struct crypto_sha256_state.  In later changes, these functions and
struct crypto_sha256_state will no longer be used by the library
functions; they will remain only for some legacy offload drivers.  This
puts them in a logical place in the file for that.

No code changes other than reordering.
Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 include/crypto/sha2.h | 60 +++++++++++++++++++++----------------------
 lib/crypto/sha256.c   | 12 ++++-----
 2 files changed, 36 insertions(+), 36 deletions(-)

diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h
index 296ce9d468bfc..bb181b7996cdc 100644
--- a/include/crypto/sha2.h
+++ b/include/crypto/sha2.h
@@ -69,10 +69,36 @@ extern const u8 sha512_zero_message_hash[SHA512_DIGEST_SIZE];
 struct crypto_sha256_state {
 	u32 state[SHA256_STATE_WORDS];
 	u64 count;
 };
 
+static inline void sha224_block_init(struct crypto_sha256_state *sctx)
+{
+	sctx->state[0] = SHA224_H0;
+	sctx->state[1] = SHA224_H1;
+	sctx->state[2] = SHA224_H2;
+	sctx->state[3] = SHA224_H3;
+	sctx->state[4] = SHA224_H4;
+	sctx->state[5] = SHA224_H5;
+	sctx->state[6] = SHA224_H6;
+	sctx->state[7] = SHA224_H7;
+	sctx->count = 0;
+}
+
+static inline void sha256_block_init(struct crypto_sha256_state *sctx)
+{
+	sctx->state[0] = SHA256_H0;
+	sctx->state[1] = SHA256_H1;
+	sctx->state[2] = SHA256_H2;
+	sctx->state[3] = SHA256_H3;
+	sctx->state[4] = SHA256_H4;
+	sctx->state[5] = SHA256_H5;
+	sctx->state[6] = SHA256_H6;
+	sctx->state[7] = SHA256_H7;
+	sctx->count = 0;
+}
+
 struct sha256_state {
 	union {
 		struct crypto_sha256_state ctx;
 		struct {
 			u32 state[SHA256_STATE_WORDS];
@@ -86,51 +112,25 @@ struct sha512_state {
 	u64 state[SHA512_DIGEST_SIZE / 8];
 	u64 count[2];
 	u8 buf[SHA512_BLOCK_SIZE];
 };
 
-static inline void sha256_block_init(struct crypto_sha256_state *sctx)
+static inline void sha224_init(struct sha256_state *sctx)
 {
-	sctx->state[0] = SHA256_H0;
-	sctx->state[1] = SHA256_H1;
-	sctx->state[2] = SHA256_H2;
-	sctx->state[3] = SHA256_H3;
-	sctx->state[4] = SHA256_H4;
-	sctx->state[5] = SHA256_H5;
-	sctx->state[6] = SHA256_H6;
-	sctx->state[7] = SHA256_H7;
-	sctx->count = 0;
+	sha224_block_init(&sctx->ctx);
 }
+/* Simply use sha256_update as it is equivalent to sha224_update. */
+void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]);
 
 static inline void sha256_init(struct sha256_state *sctx)
 {
 	sha256_block_init(&sctx->ctx);
 }
 void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len);
 void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE]);
 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]);
 
-static inline void sha224_block_init(struct crypto_sha256_state *sctx)
-{
-	sctx->state[0] = SHA224_H0;
-	sctx->state[1] = SHA224_H1;
-	sctx->state[2] = SHA224_H2;
-	sctx->state[3] = SHA224_H3;
-	sctx->state[4] = SHA224_H4;
-	sctx->state[5] = SHA224_H5;
-	sctx->state[6] = SHA224_H6;
-	sctx->state[7] = SHA224_H7;
-	sctx->count = 0;
-}
-
-static inline void sha224_init(struct sha256_state *sctx)
-{
-	sha224_block_init(&sctx->ctx);
-}
-/* Simply use sha256_update as it is equivalent to sha224_update. */
-void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]);
-
 /* State for the SHA-512 (and SHA-384) compression function */
 struct sha512_block_state {
 	u64 h[8];
 };
 
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index 6bfa4ae8dfb59..573ccecbf48bf 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -56,22 +56,22 @@ static inline void __sha256_final(struct sha256_state *sctx, u8 *out,
 	sha256_finup(&sctx->ctx, sctx->buf, partial, out, digest_size,
 		     sha256_purgatory(), false);
 	memzero_explicit(sctx, sizeof(*sctx));
 }
 
-void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE])
-{
-	__sha256_final(sctx, out, SHA256_DIGEST_SIZE);
-}
-EXPORT_SYMBOL(sha256_final);
-
 void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE])
 {
 	__sha256_final(sctx, out, SHA224_DIGEST_SIZE);
 }
 EXPORT_SYMBOL(sha224_final);
 
+void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE])
+{
+	__sha256_final(sctx, out, SHA256_DIGEST_SIZE);
+}
+EXPORT_SYMBOL(sha256_final);
+
 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE])
 {
 	struct sha256_state sctx;
 
 	sha256_init(&sctx);
-- 
2.50.0
From nobody Sun Sep 7 11:34:13 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel,
	"Jason A. Donenfeld", Eric Biggers
Subject: [PATCH v2 04/14] lib/crypto: sha256: Remove sha256_blocks_simd()
Date: Mon, 30 Jun 2025 09:06:35 -0700
Message-ID: <20250630160645.3198-5-ebiggers@kernel.org>
In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org>
References: <20250630160645.3198-1-ebiggers@kernel.org>

Instead of having both sha256_blocks_arch() and sha256_blocks_simd(),
have just sha256_blocks_arch(), which uses the most efficient
implementation that is available in the calling context.

This is simpler, as it reduces the API surface.  It's also safer, since
sha256_blocks_arch() just works in all contexts, including contexts
where the FPU/SIMD/vector registers cannot be used.  This doesn't mean
that SHA-256 computations *should* be done in such contexts, but rather
that we should just do the right thing instead of corrupting a random
task's registers.  Eliminating this footgun and simplifying the code is
well worth the very small performance cost of doing the check.

Note: in the case of arm and arm64, what used to be
sha256_blocks_arch() is renamed back to its original name of
sha256_block_data_order(); sha256_blocks_arch() is now used for the
higher-level dispatch function.  This renaming also required an update
to lib/crypto/arm64/sha512.h, since sha2-armv8.pl is shared by both
SHA-256 and SHA-512.

Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 include/crypto/internal/sha2.h |  6 ------
 lib/crypto/Kconfig             |  8 --------
 lib/crypto/arm/Kconfig         |  1 -
 lib/crypto/arm/sha256-armv4.pl | 20 ++++++++++----------
 lib/crypto/arm/sha256.c        | 14 +++++++-------
 lib/crypto/arm64/Kconfig       |  1 -
 lib/crypto/arm64/sha2-armv8.pl |  2 +-
 lib/crypto/arm64/sha256.c      | 14 +++++++-------
 lib/crypto/arm64/sha512.h      |  6 +++---
 lib/crypto/riscv/Kconfig       |  1 -
 lib/crypto/riscv/sha256.c      | 12 +++---------
 lib/crypto/x86/Kconfig         |  1 -
 lib/crypto/x86/sha256.c        | 12 +++---------
 13 files changed, 34 insertions(+), 64 deletions(-)

diff --git a/include/crypto/internal/sha2.h b/include/crypto/internal/sha2.h
index 21a27fd5e198f..5a25ccc493886 100644
--- a/include/crypto/internal/sha2.h
+++ b/include/crypto/internal/sha2.h
@@ -1,11 +1,10 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 
 #ifndef _CRYPTO_INTERNAL_SHA2_H
 #define _CRYPTO_INTERNAL_SHA2_H
 
-#include <crypto/internal/simd.h>
 #include
 #include
 #include
 #include
 #include
@@ -20,22 +19,17 @@ static inline bool sha256_is_arch_optimized(void)
 #endif
 void sha256_blocks_generic(u32 state[SHA256_STATE_WORDS],
			   const u8 *data, size_t nblocks);
 void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
			const u8 *data, size_t nblocks);
-void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
-			const u8 *data, size_t nblocks);
 
 static __always_inline void sha256_choose_blocks(
 	u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks,
 	bool force_generic, bool force_simd)
 {
 	if (!IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256) || force_generic)
 		sha256_blocks_generic(state, data, nblocks);
-	else if (IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD) &&
-		 (force_simd || crypto_simd_usable()))
-		sha256_blocks_simd(state, data, nblocks);
 	else
 		sha256_blocks_arch(state, data, nblocks);
 }
 
 static __always_inline void sha256_finup(
diff --git
 a/lib/crypto/Kconfig b/lib/crypto/Kconfig
index 2460ddff967fc..9bd740475a898 100644
--- a/lib/crypto/Kconfig
+++ b/lib/crypto/Kconfig
@@ -148,18 +148,10 @@ config CRYPTO_ARCH_HAVE_LIB_SHA256
 	bool
 	help
 	  Declares whether the architecture provides an arch-specific
 	  accelerated implementation of the SHA-256 library interface.
 
-config CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
-	bool
-	help
-	  Declares whether the architecture provides an arch-specific
-	  accelerated implementation of the SHA-256 library interface
-	  that is SIMD-based and therefore not usable in hardirq
-	  context.
-
 config CRYPTO_LIB_SHA256_GENERIC
 	tristate
 	default CRYPTO_LIB_SHA256 if !CRYPTO_ARCH_HAVE_LIB_SHA256
 	help
 	  This symbol can be selected by arch implementations of the SHA-256
diff --git a/lib/crypto/arm/Kconfig b/lib/crypto/arm/Kconfig
index d1ad664f0c674..9f3ff30f40328 100644
--- a/lib/crypto/arm/Kconfig
+++ b/lib/crypto/arm/Kconfig
@@ -26,6 +26,5 @@ config CRYPTO_POLY1305_ARM
 config CRYPTO_SHA256_ARM
 	tristate
 	depends on !CPU_V7M
 	default CRYPTO_LIB_SHA256
 	select CRYPTO_ARCH_HAVE_LIB_SHA256
-	select CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
diff --git a/lib/crypto/arm/sha256-armv4.pl b/lib/crypto/arm/sha256-armv4.pl
index 8122db7fd5990..f3a2b54efd4ee 100644
--- a/lib/crypto/arm/sha256-armv4.pl
+++ b/lib/crypto/arm/sha256-armv4.pl
@@ -202,22 +202,22 @@ K256:
 .word	0x90befffa,0xa4506ceb,0xbef9a3f7,0xc67178f2
 .size	K256,.-K256
 .word	0				@ terminator
 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
 .LOPENSSL_armcap:
-.word	OPENSSL_armcap_P-sha256_blocks_arch
+.word	OPENSSL_armcap_P-sha256_block_data_order
 #endif
 .align	5
 
-.global	sha256_blocks_arch
-.type	sha256_blocks_arch,%function
-sha256_blocks_arch:
-.Lsha256_blocks_arch:
+.global	sha256_block_data_order
+.type	sha256_block_data_order,%function
+sha256_block_data_order:
+.Lsha256_block_data_order:
 #if __ARM_ARCH__<7
-	sub	r3,pc,#8		@ sha256_blocks_arch
+	sub	r3,pc,#8		@ sha256_block_data_order
 #else
-	adr	r3,.Lsha256_blocks_arch
+	adr	r3,.Lsha256_block_data_order
 #endif
 #if __ARM_MAX_ARCH__>=7 && !defined(__KERNEL__)
 	ldr	r12,.LOPENSSL_armcap
 	ldr	r12,[r3,r12]		@ OPENSSL_armcap_P
 	tst	r12,#ARMV8_SHA256
@@ -280,11 +280,11 @@ $code.=<<___;
 	ldmia	sp!,{r4-r11,lr}
 	tst	lr,#1
 	moveq	pc,lr			@ be binary compatible with V4, yet
 	bx	lr			@ interoperable with Thumb ISA:-)
 #endif
-.size	sha256_blocks_arch,.-sha256_blocks_arch
+.size	sha256_block_data_order,.-sha256_block_data_order
___
######################################################################
# NEON stuff
#
{{{
@@ -468,12 +468,12 @@ $code.=<<___;
 sha256_block_data_order_neon:
 .LNEON:
 	stmdb	sp!,{r4-r12,lr}
 
 	sub	$H,sp,#16*4+16
-	adr	$Ktbl,.Lsha256_blocks_arch
-	sub	$Ktbl,$Ktbl,#.Lsha256_blocks_arch-K256
+	adr	$Ktbl,.Lsha256_block_data_order
+	sub	$Ktbl,$Ktbl,#.Lsha256_block_data_order-K256
 	bic	$H,$H,#15		@ align for 128-bit stores
 	mov	$t2,sp
 	mov	sp,$H			@ alloca
 	add	$len,$inp,$len,lsl#6	@ len to point at the end of inp
 
diff --git a/lib/crypto/arm/sha256.c b/lib/crypto/arm/sha256.c
index 109192e54b0f0..2c9cfdaaa0691 100644
--- a/lib/crypto/arm/sha256.c
+++ b/lib/crypto/arm/sha256.c
@@ -4,40 +4,40 @@
  *
  * Copyright 2025 Google LLC
  */
 #include
 #include
+#include <crypto/internal/simd.h>
 #include
 #include
 
-asmlinkage void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
-				   const u8 *data, size_t nblocks);
-EXPORT_SYMBOL_GPL(sha256_blocks_arch);
+asmlinkage void sha256_block_data_order(u32 state[SHA256_STATE_WORDS],
+					const u8 *data, size_t nblocks);
 asmlinkage void sha256_block_data_order_neon(u32 state[SHA256_STATE_WORDS],
					     const u8 *data, size_t nblocks);
 asmlinkage void sha256_ce_transform(u32 state[SHA256_STATE_WORDS],
				    const u8 *data, size_t nblocks);
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce);
 
-void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
			const u8 *data, size_t nblocks)
 {
 	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
-	    static_branch_likely(&have_neon)) {
+	    static_branch_likely(&have_neon) && crypto_simd_usable()) {
 		kernel_neon_begin();
 		if (static_branch_likely(&have_ce))
 			sha256_ce_transform(state, data, nblocks);
 		else
 			sha256_block_data_order_neon(state, data, nblocks);
 		kernel_neon_end();
 	} else {
-		sha256_blocks_arch(state, data, nblocks);
+		sha256_block_data_order(state, data, nblocks);
 	}
 }
-EXPORT_SYMBOL_GPL(sha256_blocks_simd);
+EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
 bool sha256_is_arch_optimized(void)
 {
 	/* We always can use at least the ARM scalar implementation. */
 	return true;
diff --git a/lib/crypto/arm64/Kconfig b/lib/crypto/arm64/Kconfig
index 129a7685cb4c1..49e57bfdb5b52 100644
--- a/lib/crypto/arm64/Kconfig
+++ b/lib/crypto/arm64/Kconfig
@@ -15,6 +15,5 @@ config CRYPTO_POLY1305_NEON
 
 config CRYPTO_SHA256_ARM64
 	tristate
 	default CRYPTO_LIB_SHA256
 	select CRYPTO_ARCH_HAVE_LIB_SHA256
-	select CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
diff --git a/lib/crypto/arm64/sha2-armv8.pl b/lib/crypto/arm64/sha2-armv8.pl
index 4aebd20c498bc..35ec9ae99fe16 100644
--- a/lib/crypto/arm64/sha2-armv8.pl
+++ b/lib/crypto/arm64/sha2-armv8.pl
@@ -93,11 +93,11 @@ if ($output =~ /512/) {
 	@sigma1=(17,19,10);
 	$rounds=64;
 	$reg_t="w";
 }
 
-$func="sha${BITS}_blocks_arch";
+$func="sha${BITS}_block_data_order";
 
 ($ctx,$inp,$num,$Ktbl)=map("x$_",(0..2,30));
 
 @X=map("$reg_t$_",(3..15,0..2));
 @V=($A,$B,$C,$D,$E,$F,$G,$H)=map("$reg_t$_",(20..27));
diff --git a/lib/crypto/arm64/sha256.c b/lib/crypto/arm64/sha256.c
index bcf7a3adc0c46..fb9bff40357be 100644
--- a/lib/crypto/arm64/sha256.c
+++ b/lib/crypto/arm64/sha256.c
@@ -4,29 +4,29 @@
  *
  * Copyright 2025 Google LLC
  */
 #include
 #include
+#include <crypto/internal/simd.h>
 #include
 #include
 
-asmlinkage void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
-				   const u8 *data, size_t nblocks);
-EXPORT_SYMBOL_GPL(sha256_blocks_arch);
+asmlinkage void sha256_block_data_order(u32 state[SHA256_STATE_WORDS],
+					const u8 *data, size_t nblocks);
 asmlinkage void sha256_block_neon(u32 state[SHA256_STATE_WORDS],
				  const u8 *data, size_t nblocks);
 asmlinkage size_t __sha256_ce_transform(u32 state[SHA256_STATE_WORDS],
					const u8 *data, size_t nblocks);
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon);
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce);
 
-void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
			const u8 *data, size_t nblocks)
 {
 	if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) &&
-	    static_branch_likely(&have_neon)) {
+	    static_branch_likely(&have_neon) && crypto_simd_usable()) {
 		if (static_branch_likely(&have_ce)) {
 			do {
 				size_t rem;
 
 				kernel_neon_begin();
@@ -40,14 +40,14 @@ void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
 			kernel_neon_begin();
 			sha256_block_neon(state, data, nblocks);
 			kernel_neon_end();
 		}
 	} else {
-		sha256_blocks_arch(state, data, nblocks);
+		sha256_block_data_order(state, data, nblocks);
 	}
 }
-EXPORT_SYMBOL_GPL(sha256_blocks_simd);
+EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
 bool sha256_is_arch_optimized(void)
 {
 	/* We always can use at least the ARM64 scalar implementation. */
 	return true;
diff --git a/lib/crypto/arm64/sha512.h b/lib/crypto/arm64/sha512.h
index eae14f9752e0b..6abb40b467f2e 100644
--- a/lib/crypto/arm64/sha512.h
+++ b/lib/crypto/arm64/sha512.h
@@ -9,12 +9,12 @@
 #include
 #include
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha512_insns);
 
-asmlinkage void sha512_blocks_arch(struct sha512_block_state *state,
-				   const u8 *data, size_t nblocks);
+asmlinkage void sha512_block_data_order(struct sha512_block_state *state,
+					const u8 *data, size_t nblocks);
 asmlinkage size_t __sha512_ce_transform(struct sha512_block_state *state,
					const u8 *data, size_t nblocks);
 
 static void sha512_blocks(struct sha512_block_state *state,
			  const u8 *data, size_t nblocks)
@@ -30,11 +30,11 @@ static void sha512_blocks(struct sha512_block_state *state,
 			kernel_neon_end();
 			data += (nblocks - rem) * SHA512_BLOCK_SIZE;
 			nblocks = rem;
 		} while (nblocks);
 	} else {
-		sha512_blocks_arch(state, data, nblocks);
+		sha512_block_data_order(state, data, nblocks);
 	}
 }
 
 #ifdef CONFIG_KERNEL_MODE_NEON
 #define sha512_mod_init_arch sha512_mod_init_arch
diff --git a/lib/crypto/riscv/Kconfig b/lib/crypto/riscv/Kconfig
index 47c99ea97ce2c..c100571feb7e8 100644
--- a/lib/crypto/riscv/Kconfig
+++ b/lib/crypto/riscv/Kconfig
@@ -10,7 +10,6 @@ config CRYPTO_CHACHA_RISCV64
 config CRYPTO_SHA256_RISCV64
 	tristate
 	depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO
 	default CRYPTO_LIB_SHA256
 	select CRYPTO_ARCH_HAVE_LIB_SHA256
-	select CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
 	select CRYPTO_LIB_SHA256_GENERIC
diff --git a/lib/crypto/riscv/sha256.c b/lib/crypto/riscv/sha256.c
index 71808397dff4c..aa77349d08f30 100644
--- a/lib/crypto/riscv/sha256.c
+++ b/lib/crypto/riscv/sha256.c
@@ -9,36 +9,30 @@
  * Author: Jerry Shih
  */
 
 #include
 #include
+#include <crypto/internal/simd.h>
 #include
 #include
 
 asmlinkage void sha256_transform_zvknha_or_zvknhb_zvkb(
 	u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks);
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_extensions);
 
-void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
			const u8 *data, size_t nblocks)
 {
-	if (static_branch_likely(&have_extensions)) {
+	if (static_branch_likely(&have_extensions) && crypto_simd_usable()) {
 		kernel_vector_begin();
 		sha256_transform_zvknha_or_zvknhb_zvkb(state, data, nblocks);
 		kernel_vector_end();
 	} else {
 		sha256_blocks_generic(state, data, nblocks);
 	}
 }
-EXPORT_SYMBOL_GPL(sha256_blocks_simd);
-
-void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
-			const u8 *data, size_t nblocks)
-{
-	sha256_blocks_generic(state, data, nblocks);
-}
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
 bool sha256_is_arch_optimized(void)
 {
 	return static_key_enabled(&have_extensions);
diff --git a/lib/crypto/x86/Kconfig b/lib/crypto/x86/Kconfig
index 5e94cdee492c2..e344579db3d85 100644
--- a/lib/crypto/x86/Kconfig
+++ b/lib/crypto/x86/Kconfig
@@ -28,7 +28,6 @@ config CRYPTO_POLY1305_X86_64
 config CRYPTO_SHA256_X86_64
 	tristate
 	depends on 64BIT
 	default CRYPTO_LIB_SHA256
 	select CRYPTO_ARCH_HAVE_LIB_SHA256
-	select CRYPTO_ARCH_HAVE_LIB_SHA256_SIMD
 	select CRYPTO_LIB_SHA256_GENERIC
diff --git a/lib/crypto/x86/sha256.c b/lib/crypto/x86/sha256.c
index 80380f8fdcee4..baba74d7d26f2 100644
--- a/lib/crypto/x86/sha256.c
+++ b/lib/crypto/x86/sha256.c
@@ -4,10 +4,11 @@
  *
  * Copyright 2025 Google LLC
  */
 #include
 #include
+#include <crypto/internal/simd.h>
 #include
 #include
 #include
 
 asmlinkage void sha256_transform_ssse3(u32 state[SHA256_STATE_WORDS],
@@ -21,28 +22,21 @@ asmlinkage void sha256_ni_transform(u32 state[SHA256_STATE_WORDS],
 
 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_x86);
 
 DEFINE_STATIC_CALL(sha256_blocks_x86, sha256_transform_ssse3);
 
-void sha256_blocks_simd(u32 state[SHA256_STATE_WORDS],
+void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
			const u8 *data, size_t nblocks)
 {
-	if (static_branch_likely(&have_sha256_x86)) {
+	if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable()) {
 		kernel_fpu_begin();
 		static_call(sha256_blocks_x86)(state, data, nblocks);
 		kernel_fpu_end();
 	} else {
 		sha256_blocks_generic(state, data, nblocks);
 	}
 }
-EXPORT_SYMBOL_GPL(sha256_blocks_simd);
-
-void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS],
-			const u8 *data, size_t nblocks)
-{
-	sha256_blocks_generic(state, data, nblocks);
-}
 EXPORT_SYMBOL_GPL(sha256_blocks_arch);
 
 bool sha256_is_arch_optimized(void)
 {
 	return static_key_enabled(&have_sha256_x86);
-- 
2.50.0
Jun 2025 16:09:22 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1751299763; bh=6di/lemdUUuURN063IywyRZhcwaKdCBZPfLQhM19GXE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=YUEz13trRmGOF1EnTtptvIlzO1jaL21f4T0TLFxkW98FzmMLj6qDaW0ODvpbNWxVx tfzJ5ASzjGW+jCVRJoRjoHWBYInTKNgLLwpEMauVZBla27IPaaN+Sguq8bUNybwcZ4 m+N6TBi5X7Aay6IJ+JyWUd5O6u3c4w4eAXAtYJv0zGGmEokWNCcro5PhDuNp1zwNIV 7GOiv6OLqYq9c2mtptz2nJBsjhusfEek4rvPdyXbAKJYVSSGkIMS15fAWfrnJ7CW2r CftvB1tqiZa/uHE5fCme+6m0uY6xzuSfr5q5bF9mchP3IO8KvUpNaXsOLzSyGIIoJR H3sy/rJqdx/Hw== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , "Jason A . Donenfeld" , Eric Biggers Subject: [PATCH v2 05/14] lib/crypto: sha256: Add sha224() and sha224_update() Date: Mon, 30 Jun 2025 09:06:36 -0700 Message-ID: <20250630160645.3198-6-ebiggers@kernel.org> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org> References: <20250630160645.3198-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add a one-shot SHA-224 computation function sha224(), for consistency with sha256(), sha384(), and sha512() which all already exist. Similarly, add sha224_update(). While for now it's identical to sha256_update(), omitting it makes the API harder to use since users have to "know" which functions are the same between SHA-224 and SHA-256. Also, this is a prerequisite for using different context types for each. 
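The in-kernel library API cannot be called from userspace, but the property
the new one-shot sha224() helper packages up, that init + update + final over
arbitrary chunks yields the same digest as a single-pass computation, can be
sketched with Python's hashlib (the chunk size 37 is arbitrary, chosen only
to exercise partial-block buffering):

```python
import hashlib

msg = b"The quick brown fox jumps over the lazy dog" * 10

# One-shot, analogous to the new sha224() helper.
oneshot = hashlib.sha224(msg).hexdigest()

# Incremental, analogous to sha224_init()/sha224_update()/sha224_final().
h = hashlib.sha224()
for i in range(0, len(msg), 37):
    h.update(msg[i:i + 37])
incremental = h.hexdigest()

assert oneshot == incremental
```

This is exactly why the one-shot wrapper can be implemented as a trivial
init/update/final sequence, as the diff below does.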
Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 include/crypto/sha2.h | 10 ++++++++--
 lib/crypto/sha256.c   | 10 ++++++++++
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h
index bb181b7996cdc..e31da0743a522 100644
--- a/include/crypto/sha2.h
+++ b/include/crypto/sha2.h
@@ -112,22 +112,28 @@ struct sha512_state {
 	u64 state[SHA512_DIGEST_SIZE / 8];
 	u64 count[2];
 	u8 buf[SHA512_BLOCK_SIZE];
 };

+void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len);
+
 static inline void sha224_init(struct sha256_state *sctx)
 {
 	sha224_block_init(&sctx->ctx);
 }
-/* Simply use sha256_update as it is equivalent to sha224_update. */
+static inline void sha224_update(struct sha256_state *sctx,
+				 const u8 *data, size_t len)
+{
+	sha256_update(sctx, data, len);
+}
 void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]);
+void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE]);

 static inline void sha256_init(struct sha256_state *sctx)
 {
 	sha256_block_init(&sctx->ctx);
 }
-void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len);
 void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE]);
 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]);

 /* State for the SHA-512 (and SHA-384) compression function */
 struct sha512_block_state {
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index 573ccecbf48bf..ccaae70880166 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -68,10 +68,20 @@ void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE])
 {
 	__sha256_final(sctx, out, SHA256_DIGEST_SIZE);
 }
 EXPORT_SYMBOL(sha256_final);

+void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE])
+{
+	struct sha256_state sctx;
+
+	sha224_init(&sctx);
+	sha224_update(&sctx, data, len);
+	sha224_final(&sctx, out);
+}
+EXPORT_SYMBOL(sha224);
+
 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE])
 {
 	struct sha256_state sctx;

 	sha256_init(&sctx);
-- 
2.50.0

From nobody Sun Sep 7 11:34:13 2025
From: Eric Biggers
To: linux-crypto@vger.kernel.org
Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org,
	linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
	linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org,
	sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel,
	"Jason A. Donenfeld", Eric Biggers
Subject: [PATCH v2 06/14] lib/crypto: sha256: Make library API use strongly-typed contexts
Date: Mon, 30 Jun 2025 09:06:37 -0700
Message-ID: <20250630160645.3198-7-ebiggers@kernel.org>
In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org>
References: <20250630160645.3198-1-ebiggers@kernel.org>

Currently the SHA-224 and SHA-256 library functions can be mixed
arbitrarily, even in ways that are incorrect, for example using
sha224_init() and sha256_final(). This is because they operate on the
same structure, sha256_state.

Introduce stronger typing, as I did for SHA-384 and SHA-512. Also as I
did for SHA-384 and SHA-512, use the names *_ctx instead of *_state.
The *_ctx names have the following small benefits:

- They're shorter.
- They avoid an ambiguity with the compression function state.
- They're consistent with the well-known OpenSSL API.
- Users usually name the variable 'sctx' anyway, which suggests that
  *_ctx would be the more natural name for the actual struct.

Therefore, update the SHA-224 and SHA-256 APIs, the implementation, and
the calling code accordingly. In the new structs, also strongly type
the compression function state.
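Alongside the typing change, this patch's rewrite of lib/crypto/sha256.c
open-codes the Merkle-Damgard finalization: append 0x80, zero-fill, then
store the message length in bits as a big-endian 64-bit value, spilling into
an extra block when fewer than 8 bytes remain. A hedged Python sketch of
that padding rule (sha256_padding is an illustrative name, not a kernel
function; it mirrors the __sha256_final() logic in the diff below):

```python
SHA256_BLOCK_SIZE = 64

def sha256_padding(bytecount: int) -> bytes:
    """Bytes appended after a bytecount-byte message."""
    partial = bytecount % SHA256_BLOCK_SIZE
    pad = bytearray(b"\x80")          # ctx->buf[partial++] = 0x80;
    partial += 1
    if partial > SHA256_BLOCK_SIZE - 8:
        # No room left for the length field: zero out this block
        # (the C code processes it immediately) and start a fresh one.
        pad += b"\x00" * (SHA256_BLOCK_SIZE - partial)
        partial = 0
    # Zero up to the final 8 bytes, then the big-endian bit count.
    pad += b"\x00" * (SHA256_BLOCK_SIZE - 8 - partial)
    pad += (bytecount * 8).to_bytes(8, "big")
    return bytes(pad)

# The padded message is always a whole number of 64-byte blocks.
for n in (0, 1, 55, 56, 63, 64, 1000):
    assert (n + len(sha256_padding(n))) % SHA256_BLOCK_SIZE == 0
```

Note the boundary cases: a 55-byte message needs only 9 bytes of padding
(one block total), while a 56-byte message needs 72 (the length field no
longer fits, forcing a second block).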
Signed-off-by: Eric Biggers
Acked-by: Ard Biesheuvel
---
 arch/riscv/purgatory/purgatory.c |   8 +--
 arch/s390/purgatory/purgatory.c  |   2 +-
 arch/x86/purgatory/purgatory.c   |   2 +-
 crypto/sha256.c                  |  16 ++---
 drivers/char/tpm/tpm2-sessions.c |  12 ++--
 include/crypto/sha2.h            |  52 ++++++++++++----
 kernel/kexec_file.c              |  10 ++--
 lib/crypto/sha256.c              | 100 ++++++++++++++++++++++---------
 8 files changed, 139 insertions(+), 63 deletions(-)

diff --git a/arch/riscv/purgatory/purgatory.c b/arch/riscv/purgatory/purgatory.c
index 80596ab5fb622..bbd5cfa4d7412 100644
--- a/arch/riscv/purgatory/purgatory.c
+++ b/arch/riscv/purgatory/purgatory.c
@@ -18,18 +18,18 @@ u8 purgatory_sha256_digest[SHA256_DIGEST_SIZE] __section(".kexec-purgatory");
 struct kexec_sha_region purgatory_sha_regions[KEXEC_SEGMENT_MAX] __section(".kexec-purgatory");

 static int verify_sha256_digest(void)
 {
 	struct kexec_sha_region *ptr, *end;
-	struct sha256_state ss;
+	struct sha256_ctx sctx;
 	u8 digest[SHA256_DIGEST_SIZE];

-	sha256_init(&ss);
+	sha256_init(&sctx);
 	end = purgatory_sha_regions + ARRAY_SIZE(purgatory_sha_regions);
 	for (ptr = purgatory_sha_regions; ptr < end; ptr++)
-		sha256_update(&ss, (uint8_t *)(ptr->start), ptr->len);
-	sha256_final(&ss, digest);
+		sha256_update(&sctx, (uint8_t *)(ptr->start), ptr->len);
+	sha256_final(&sctx, digest);
 	if (memcmp(digest, purgatory_sha256_digest, sizeof(digest)) != 0)
 		return 1;
 	return 0;
 }

diff --git a/arch/s390/purgatory/purgatory.c b/arch/s390/purgatory/purgatory.c
index 030efda05dbe5..ecb38102187c2 100644
--- a/arch/s390/purgatory/purgatory.c
+++ b/arch/s390/purgatory/purgatory.c
@@ -14,11 +14,11 @@

 int verify_sha256_digest(void)
 {
 	struct kexec_sha_region *ptr, *end;
 	u8 digest[SHA256_DIGEST_SIZE];
-	struct sha256_state sctx;
+	struct sha256_ctx sctx;

 	sha256_init(&sctx);
 	end = purgatory_sha_regions + ARRAY_SIZE(purgatory_sha_regions);

 	for (ptr = purgatory_sha_regions; ptr < end; ptr++)
diff --git a/arch/x86/purgatory/purgatory.c b/arch/x86/purgatory/purgatory.c
index aea47e7939637..655139dd05325 100644
--- a/arch/x86/purgatory/purgatory.c
+++ b/arch/x86/purgatory/purgatory.c
@@ -23,11 +23,11 @@ struct kexec_sha_region purgatory_sha_regions[KEXEC_SEGMENT_MAX] __section(".kex

 static int verify_sha256_digest(void)
 {
 	struct kexec_sha_region *ptr, *end;
 	u8 digest[SHA256_DIGEST_SIZE];
-	struct sha256_state sctx;
+	struct sha256_ctx sctx;

 	sha256_init(&sctx);
 	end = purgatory_sha_regions + ARRAY_SIZE(purgatory_sha_regions);

 	for (ptr = purgatory_sha_regions; ptr < end; ptr++)
diff --git a/crypto/sha256.c b/crypto/sha256.c
index 4aeb213bab117..15c57fba256b7 100644
--- a/crypto/sha256.c
+++ b/crypto/sha256.c
@@ -135,28 +135,28 @@ static int crypto_sha224_final_lib(struct shash_desc *desc, u8 *out)
 	return 0;
 }

 static int crypto_sha256_import_lib(struct shash_desc *desc, const void *in)
 {
-	struct sha256_state *sctx = shash_desc_ctx(desc);
+	struct __sha256_ctx *sctx = shash_desc_ctx(desc);
 	const u8 *p = in;

 	memcpy(sctx, p, sizeof(*sctx));
 	p += sizeof(*sctx);
-	sctx->count += *p;
+	sctx->bytecount += *p;
 	return 0;
 }

 static int crypto_sha256_export_lib(struct shash_desc *desc, void *out)
 {
-	struct sha256_state *sctx0 = shash_desc_ctx(desc);
-	struct sha256_state sctx = *sctx0;
+	struct __sha256_ctx *sctx0 = shash_desc_ctx(desc);
+	struct __sha256_ctx sctx = *sctx0;
 	unsigned int partial;
 	u8 *p = out;

-	partial = sctx.count % SHA256_BLOCK_SIZE;
-	sctx.count -= partial;
+	partial = sctx.bytecount % SHA256_BLOCK_SIZE;
+	sctx.bytecount -= partial;
 	memcpy(p, &sctx, sizeof(sctx));
 	p += sizeof(sctx);
 	*p = partial;
 	return 0;
 }
@@ -199,11 +199,11 @@ static struct shash_alg algs[] = {
 	.digestsize	= SHA256_DIGEST_SIZE,
 	.init		= crypto_sha256_init,
 	.update		= crypto_sha256_update_lib,
 	.final		= crypto_sha256_final_lib,
 	.digest		= crypto_sha256_digest_lib,
-	.descsize	= sizeof(struct sha256_state),
+	.descsize	= sizeof(struct sha256_ctx),
 	.statesize	= sizeof(struct crypto_sha256_state) +
 			  SHA256_BLOCK_SIZE + 1,
 	.import		= crypto_sha256_import_lib,
 	.export		= crypto_sha256_export_lib,
 },
@@ -214,11 +214,11 @@ static struct shash_alg algs[] = {
 	.base.cra_module	= THIS_MODULE,
 	.digestsize	= SHA224_DIGEST_SIZE,
 	.init		= crypto_sha224_init,
 	.update		= crypto_sha256_update_lib,
 	.final		= crypto_sha224_final_lib,
-	.descsize	= sizeof(struct sha256_state),
+	.descsize	= sizeof(struct sha224_ctx),
 	.statesize	= sizeof(struct crypto_sha256_state) +
 			  SHA256_BLOCK_SIZE + 1,
 	.import		= crypto_sha256_import_lib,
 	.export		= crypto_sha256_export_lib,
 },
diff --git a/drivers/char/tpm/tpm2-sessions.c b/drivers/char/tpm/tpm2-sessions.c
index 7b5049b3d476e..bdb119453dfbe 100644
--- a/drivers/char/tpm/tpm2-sessions.c
+++ b/drivers/char/tpm/tpm2-sessions.c
@@ -388,11 +388,11 @@ static int tpm2_create_primary(struct tpm_chip *chip, u32 hierarchy,
  * It turns out the crypto hmac(sha256) is hard for us to consume
  * because it assumes a fixed key and the TPM seems to change the key
  * on every operation, so we weld the hmac init and final functions in
  * here to give it the same usage characteristics as a regular hash
  */
-static void tpm2_hmac_init(struct sha256_state *sctx, u8 *key, u32 key_len)
+static void tpm2_hmac_init(struct sha256_ctx *sctx, u8 *key, u32 key_len)
 {
 	u8 pad[SHA256_BLOCK_SIZE];
 	int i;

 	sha256_init(sctx);
@@ -404,11 +404,11 @@ static void tpm2_hmac_init(struct sha256_ctx *sctx, u8 *key, u32 key_len)
 		pad[i] ^= HMAC_IPAD_VALUE;
 	}
 	sha256_update(sctx, pad, sizeof(pad));
 }

-static void tpm2_hmac_final(struct sha256_state *sctx, u8 *key, u32 key_len,
+static void tpm2_hmac_final(struct sha256_ctx *sctx, u8 *key, u32 key_len,
 			    u8 *out)
 {
 	u8 pad[SHA256_BLOCK_SIZE];
 	int i;

@@ -438,11 +438,11 @@ static void tpm2_KDFa(u8 *key, u32 key_len, const char *label, u8 *u,
 {
 	u32 counter = 1;
 	const __be32 bits = cpu_to_be32(bytes * 8);

 	while (bytes > 0) {
-		struct sha256_state sctx;
+		struct sha256_ctx sctx;
 		__be32 c = cpu_to_be32(counter);

 		tpm2_hmac_init(&sctx, key, key_len);
 		sha256_update(&sctx, (u8 *)&c, sizeof(c));
 		sha256_update(&sctx, label, strlen(label)+1);
@@ -465,11 +465,11 @@ static void tpm2_KDFa(u8 *key, u32 key_len, const char *label, u8 *u,
  * in this KDF.
  */
 static void tpm2_KDFe(u8 z[EC_PT_SZ], const char *str, u8 *pt_u, u8 *pt_v,
 		      u8 *out)
 {
-	struct sha256_state sctx;
+	struct sha256_ctx sctx;
 	/*
 	 * this should be an iterative counter, but because we know
 	 * we're only taking 32 bytes for the point using a sha256
 	 * hash which is also 32 bytes, there's only one loop
 	 */
@@ -590,11 +590,11 @@ void tpm_buf_fill_hmac_session(struct tpm_chip *chip, struct tpm_buf *buf)
 	struct tpm_header *head = (struct tpm_header *)buf->data;
 	off_t offset_s = TPM_HEADER_SIZE, offset_p;
 	u8 *hmac = NULL;
 	u32 attrs;
 	u8 cphash[SHA256_DIGEST_SIZE];
-	struct sha256_state sctx;
+	struct sha256_ctx sctx;

 	if (!auth)
 		return;

 	/* save the command code in BE format */
@@ -748,11 +748,11 @@ int tpm_buf_check_hmac_response(struct tpm_chip *chip, struct tpm_buf *buf,
 	struct tpm_header *head = (struct tpm_header *)buf->data;
 	struct tpm2_auth *auth = chip->auth;
 	off_t offset_s, offset_p;
 	u8 rphash[SHA256_DIGEST_SIZE];
 	u32 attrs, cc;
-	struct sha256_state sctx;
+	struct sha256_ctx sctx;
 	u16 tag = be16_to_cpu(head->tag);
 	int parm_len, len, i, handles;

 	if (!auth)
 		return rc;
diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h
index e31da0743a522..18e1eec841b71 100644
--- a/include/crypto/sha2.h
+++ b/include/crypto/sha2.h
@@ -112,29 +112,59 @@ struct sha512_state {
 	u64 state[SHA512_DIGEST_SIZE / 8];
 	u64 count[2];
 	u8 buf[SHA512_BLOCK_SIZE];
 };

-void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len);
+/* State for the SHA-256 (and SHA-224) compression function */
+struct sha256_block_state {
+	u32 h[SHA256_STATE_WORDS];
+};

-static inline void sha224_init(struct sha256_state *sctx)
-{
-	sha224_block_init(&sctx->ctx);
-}
-static inline void sha224_update(struct sha256_state *sctx,
+/*
+ * Context structure, shared by SHA-224 and SHA-256. The sha224_ctx and
+ * sha256_ctx structs wrap this one so that the API has proper typing and
+ * doesn't allow mixing the SHA-224 and SHA-256 functions arbitrarily.
+ */
+struct __sha256_ctx {
+	struct sha256_block_state state;
+	u64 bytecount;
+	u8 buf[SHA256_BLOCK_SIZE] __aligned(__alignof__(__be64));
+};
+void __sha256_update(struct __sha256_ctx *ctx, const u8 *data, size_t len);
+
+/**
+ * struct sha224_ctx - Context for hashing a message with SHA-224
+ * @ctx: private
+ */
+struct sha224_ctx {
+	struct __sha256_ctx ctx;
+};
+
+void sha224_init(struct sha224_ctx *ctx);
+static inline void sha224_update(struct sha224_ctx *ctx,
 				 const u8 *data, size_t len)
 {
-	sha256_update(sctx, data, len);
+	__sha256_update(&ctx->ctx, data, len);
 }
-void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE]);
+void sha224_final(struct sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]);
 void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE]);

-static inline void sha256_init(struct sha256_state *sctx)
+/**
+ * struct sha256_ctx - Context for hashing a message with SHA-256
+ * @ctx: private
+ */
+struct sha256_ctx {
+	struct __sha256_ctx ctx;
+};
+
+void sha256_init(struct sha256_ctx *ctx);
+static inline void sha256_update(struct sha256_ctx *ctx,
+				 const u8 *data, size_t len)
 {
-	sha256_block_init(&sctx->ctx);
+	__sha256_update(&ctx->ctx, data, len);
 }
-void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE]);
+void sha256_final(struct sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]);
 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]);

 /* State for the SHA-512 (and SHA-384) compression function */
 struct sha512_block_state {
 	u64 h[8];
diff --git a/kernel/kexec_file.c b/kernel/kexec_file.c
index 69fe76fd92334..b835033c65eb1 100644
--- a/kernel/kexec_file.c
+++ b/kernel/kexec_file.c
@@ -749,11 +749,11 @@ int kexec_add_buffer(struct kexec_buf *kbuf)
 }

 /* Calculate and store the digest of segments */
 static int kexec_calculate_store_digests(struct kimage *image)
 {
-	struct sha256_state state;
+	struct sha256_ctx sctx;
 	int ret = 0, i, j, zero_buf_sz, sha_region_sz;
 	size_t nullsz;
 	u8 digest[SHA256_DIGEST_SIZE];
 	void *zero_buf;
 	struct kexec_sha_region *sha_regions;
@@ -768,11 +768,11 @@ static int kexec_calculate_store_digests(struct kimage *image)
 	sha_region_sz = KEXEC_SEGMENT_MAX * sizeof(struct kexec_sha_region);
 	sha_regions = vzalloc(sha_region_sz);
 	if (!sha_regions)
 		return -ENOMEM;

-	sha256_init(&state);
+	sha256_init(&sctx);

 	for (j = i = 0; i < image->nr_segments; i++) {
 		struct kexec_segment *ksegment;

 #ifdef CONFIG_CRASH_HOTPLUG
@@ -794,11 +794,11 @@ static int kexec_calculate_store_digests(struct kimage *image)
 		 * the current index
 		 */
 		if (check_ima_segment_index(image, i))
 			continue;

-		sha256_update(&state, ksegment->kbuf, ksegment->bufsz);
+		sha256_update(&sctx, ksegment->kbuf, ksegment->bufsz);

 		/*
 		 * Assume rest of the buffer is filled with zero and
 		 * update digest accordingly.
 		 */
@@ -806,20 +806,20 @@ static int kexec_calculate_store_digests(struct kimage *image)
 		while (nullsz) {
 			unsigned long bytes = nullsz;

 			if (bytes > zero_buf_sz)
 				bytes = zero_buf_sz;
-			sha256_update(&state, zero_buf, bytes);
+			sha256_update(&sctx, zero_buf, bytes);
 			nullsz -= bytes;
 		}

 		sha_regions[j].start = ksegment->mem;
 		sha_regions[j].len = ksegment->memsz;
 		j++;
 	}

-	sha256_final(&state, digest);
+	sha256_final(&sctx, digest);

 	ret = kexec_purgatory_get_set_symbol(image, "purgatory_sha_regions",
 					     sha_regions, sha_region_sz, 0);
 	if (ret)
 		goto out_free_sha_regions;
diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c
index ccaae70880166..3e7797a4489de 100644
--- a/lib/crypto/sha256.c
+++ b/lib/crypto/sha256.c
@@ -16,10 +16,24 @@
 #include
 #include
 #include
 #include

+static const struct sha256_block_state sha224_iv = {
+	.h = {
+		SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3,
+		SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7,
+	},
+};
+
+static const struct sha256_block_state sha256_iv = {
+	.h = {
+		SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3,
+		SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7,
+	},
+};
+
 /*
  * If __DISABLE_EXPORTS is defined, then this file is being compiled for a
  * pre-boot environment. In that case, ignore the kconfig options, pull the
  * generic code into the same translation unit, and use that only.
  */
@@ -30,65 +44,97 @@ static inline bool sha256_purgatory(void)
 {
 	return __is_defined(__DISABLE_EXPORTS);
 }

-static inline void sha256_blocks(u32 state[SHA256_STATE_WORDS], const u8 *data,
-				 size_t nblocks)
+static inline void sha256_blocks(struct sha256_block_state *state,
+				 const u8 *data, size_t nblocks)
+{
+	sha256_choose_blocks(state->h, data, nblocks, sha256_purgatory(), false);
+}
+
+static void __sha256_init(struct __sha256_ctx *ctx,
+			  const struct sha256_block_state *iv,
+			  u64 initial_bytecount)
+{
+	ctx->state = *iv;
+	ctx->bytecount = initial_bytecount;
+}
+
+void sha224_init(struct sha224_ctx *ctx)
+{
+	__sha256_init(&ctx->ctx, &sha224_iv, 0);
+}
+EXPORT_SYMBOL_GPL(sha224_init);
+
+void sha256_init(struct sha256_ctx *ctx)
 {
-	sha256_choose_blocks(state, data, nblocks, sha256_purgatory(), false);
+	__sha256_init(&ctx->ctx, &sha256_iv, 0);
 }
+EXPORT_SYMBOL_GPL(sha256_init);

-void sha256_update(struct sha256_state *sctx, const u8 *data, size_t len)
+void __sha256_update(struct __sha256_ctx *ctx, const u8 *data, size_t len)
 {
-	size_t partial = sctx->count % SHA256_BLOCK_SIZE;
+	size_t partial = ctx->bytecount % SHA256_BLOCK_SIZE;

-	sctx->count += len;
-	BLOCK_HASH_UPDATE_BLOCKS(sha256_blocks, sctx->ctx.state, data, len,
-				 SHA256_BLOCK_SIZE, sctx->buf, partial);
+	ctx->bytecount += len;
+	BLOCK_HASH_UPDATE_BLOCKS(sha256_blocks, &ctx->state, data, len,
+				 SHA256_BLOCK_SIZE, ctx->buf, partial);
 }
-EXPORT_SYMBOL(sha256_update);
+EXPORT_SYMBOL(__sha256_update);

-static inline void __sha256_final(struct sha256_state *sctx, u8 *out,
-				  size_t digest_size)
+static void __sha256_final(struct __sha256_ctx *ctx,
+			   u8 *out, size_t digest_size)
 {
-	size_t partial = sctx->count % SHA256_BLOCK_SIZE;
+	u64 bitcount = ctx->bytecount << 3;
+	size_t partial = ctx->bytecount % SHA256_BLOCK_SIZE;
+
+	ctx->buf[partial++] = 0x80;
+	if (partial > SHA256_BLOCK_SIZE - 8) {
+		memset(&ctx->buf[partial], 0, SHA256_BLOCK_SIZE - partial);
+		sha256_blocks(&ctx->state, ctx->buf, 1);
+		partial = 0;
+	}
+	memset(&ctx->buf[partial], 0, SHA256_BLOCK_SIZE - 8 - partial);
+	*(__be64 *)&ctx->buf[SHA256_BLOCK_SIZE - 8] = cpu_to_be64(bitcount);
+	sha256_blocks(&ctx->state, ctx->buf, 1);

-	sha256_finup(&sctx->ctx, sctx->buf, partial, out, digest_size,
-		     sha256_purgatory(), false);
-	memzero_explicit(sctx, sizeof(*sctx));
+	for (size_t i = 0; i < digest_size; i += 4)
+		put_unaligned_be32(ctx->state.h[i / 4], out + i);
 }

-void sha224_final(struct sha256_state *sctx, u8 out[SHA224_DIGEST_SIZE])
+void sha224_final(struct sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE])
 {
-	__sha256_final(sctx, out, SHA224_DIGEST_SIZE);
+	__sha256_final(&ctx->ctx, out, SHA224_DIGEST_SIZE);
+	memzero_explicit(ctx, sizeof(*ctx));
 }
 EXPORT_SYMBOL(sha224_final);

-void sha256_final(struct sha256_state *sctx, u8 out[SHA256_DIGEST_SIZE])
+void sha256_final(struct sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE])
 {
-	__sha256_final(sctx, out, SHA256_DIGEST_SIZE);
+	__sha256_final(&ctx->ctx, out, SHA256_DIGEST_SIZE);
+	memzero_explicit(ctx, sizeof(*ctx));
 }
 EXPORT_SYMBOL(sha256_final);

 void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE])
 {
-	struct sha256_state sctx;
+	struct sha224_ctx ctx;

-	sha224_init(&sctx);
-	sha224_update(&sctx, data, len);
-	sha224_final(&sctx, out);
+	sha224_init(&ctx);
+	sha224_update(&ctx, data, len);
+	sha224_final(&ctx, out);
 }
 EXPORT_SYMBOL(sha224);

 void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE])
 {
-	struct sha256_state sctx;
+	struct sha256_ctx ctx;

-	sha256_init(&sctx);
-	sha256_update(&sctx, data, len);
-	sha256_final(&sctx, out);
+	sha256_init(&ctx);
+	sha256_update(&ctx, data, len);
+	sha256_final(&ctx, out);
 }
 EXPORT_SYMBOL(sha256);

 MODULE_DESCRIPTION("SHA-256 Algorithm");
 MODULE_LICENSE("GPL");
-- 
2.50.0
TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 708C42BDC3D; Mon, 30 Jun 2025 16:09:24 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1751299764; cv=none; b=Hi39zs5UyDy5cmiWqNpShbDeicHlqj/VmyT5NsVCPa6h+B4lM7q/+U8FspauE2e6HZ2oULrK/t5PAKKBuO4RKeKpVbXtpZMo9S0V9kvEIKMQmiZUebm2AwF8AX+3asPf/Xjyh+plTezXvsRJiK/bAkrBoWc7UHTGO0ron4C69UE= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1751299764; c=relaxed/simple; bh=efA+GKjUU+dJBfSOtRlsOSdIVA7dpLoqkdFFxQ9hafY=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=E6FNNvUryLX4XfDXp6zBVVQZgloCDiBcCqg8oXriD8t9yMULa7DH6OCKkoNvSc7G9TYgxuO3QyuS643NaWpVrUVJsibwKVmr4inq4SmmS0RuaNV/yM7pyCAa97ajJDk30F9nOCNBCv/xsIGTShFryjXp2VV3I1QMWAWBWweq5ho= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=VFsvTo3W; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="VFsvTo3W" Received: by smtp.kernel.org (Postfix) with ESMTPSA id B4514C4CEF2; Mon, 30 Jun 2025 16:09:23 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1751299764; bh=efA+GKjUU+dJBfSOtRlsOSdIVA7dpLoqkdFFxQ9hafY=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=VFsvTo3WILvMQpHd+yvvsFjwF1nkjL4SABX/m6m4jRbLRUDpkUZ65baJWQEbTAY8q BC651fUP9hJJGIgmt+qDICBAhcYpmkc8z0WllWHVMNLkgH78gULm5K68gRyHnoeW91 PqlppPQfoBY0IcrdKbXUDpxs4Ty9sI66Q0SmZ8v6Sk7fJGC/aEhFH47hWgSjPxKIKV iqFwf2hu6SAs4XcoKT3EgkOrS51oiMWijrGql6rtRXyCqiDw5zncRc5xqtaaev/eb3 f9q/M7V/K0yoIaxDUOyez61A58OXHd8B0Mcvn32tYuKyQgKWFFZaLEdp3Ni3hACfBg qiqQQFQg/ypOg== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: 
linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , "Jason A . Donenfeld" , Eric Biggers Subject: [PATCH v2 07/14] lib/crypto: sha256: Propagate sha256_block_state type to implementations Date: Mon, 30 Jun 2025 09:06:38 -0700 Message-ID: <20250630160645.3198-8-ebiggers@kernel.org> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org> References: <20250630160645.3198-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The previous commit made the SHA-256 compression function state be strongly typed, but it wasn't propagated all the way down to the implementations of it. Do that now. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- .../mips/cavium-octeon/crypto/octeon-sha256.c | 2 +- include/crypto/internal/sha2.h | 8 +++---- lib/crypto/arm/sha256-ce.S | 2 +- lib/crypto/arm/sha256.c | 8 +++---- lib/crypto/arm64/sha256-ce.S | 2 +- lib/crypto/arm64/sha256.c | 8 +++---- lib/crypto/powerpc/sha256.c | 2 +- .../sha256-riscv64-zvknha_or_zvknhb-zvkb.S | 2 +- lib/crypto/riscv/sha256.c | 7 +++--- lib/crypto/s390/sha256.c | 2 +- lib/crypto/sha256-generic.c | 24 ++++++++++++++----- lib/crypto/sparc/sha256.c | 4 ++-- lib/crypto/x86/sha256-avx-asm.S | 2 +- lib/crypto/x86/sha256-avx2-asm.S | 2 +- lib/crypto/x86/sha256-ni-asm.S | 2 +- lib/crypto/x86/sha256-ssse3-asm.S | 2 +- lib/crypto/x86/sha256.c | 10 ++++---- 17 files changed, 51 insertions(+), 38 deletions(-) diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha256.c b/arch/mips/cav= ium-octeon/crypto/octeon-sha256.c index c20038239cb6b..f8664818d04ec 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-sha256.c +++ 
b/arch/mips/cavium-octeon/crypto/octeon-sha256.c @@ -20,11 +20,11 @@ =20 /* * We pass everything as 64-bit. OCTEON can handle misaligned data. */ =20 -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { struct octeon_cop2_state cop2_state; u64 *state64 =3D (u64 *)state; unsigned long flags; diff --git a/include/crypto/internal/sha2.h b/include/crypto/internal/sha2.h index 5a25ccc493886..f0f455477bbd7 100644 --- a/include/crypto/internal/sha2.h +++ b/include/crypto/internal/sha2.h @@ -15,23 +15,23 @@ bool sha256_is_arch_optimized(void); static inline bool sha256_is_arch_optimized(void) { return false; } #endif -void sha256_blocks_generic(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_generic(struct sha256_block_state *state, const u8 *data, size_t nblocks); -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks); =20 static __always_inline void sha256_choose_blocks( u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks, bool force_generic, bool force_simd) { if (!IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256) || force_generic) - sha256_blocks_generic(state, data, nblocks); + sha256_blocks_generic((struct sha256_block_state *)state, data, nblocks); else - sha256_blocks_arch(state, data, nblocks); + sha256_blocks_arch((struct sha256_block_state *)state, data, nblocks); } =20 static __always_inline void sha256_finup( struct crypto_sha256_state *sctx, u8 buf[SHA256_BLOCK_SIZE], size_t len, u8 out[SHA256_DIGEST_SIZE], size_t digest_size, diff --git a/lib/crypto/arm/sha256-ce.S b/lib/crypto/arm/sha256-ce.S index ac2c9b01b22d2..7481ac8e6c0d9 100644 --- a/lib/crypto/arm/sha256-ce.S +++ b/lib/crypto/arm/sha256-ce.S @@ -65,11 +65,11 @@ .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 
0xc67178f2 /* - * void sha256_ce_transform(u32 state[SHA256_STATE_WORDS], + * void sha256_ce_transform(struct sha256_block_state *state, * const u8 *data, size_t nblocks); */ ENTRY(sha256_ce_transform) /* load state */ vld1.32 {dga-dgb}, [r0] diff --git a/lib/crypto/arm/sha256.c b/lib/crypto/arm/sha256.c index 2c9cfdaaa0691..7d90823586952 100644 --- a/lib/crypto/arm/sha256.c +++ b/lib/crypto/arm/sha256.c @@ -8,21 +8,21 @@ #include #include #include #include -asmlinkage void sha256_block_data_order(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_block_data_order(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage void sha256_block_data_order_neon(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_block_data_order_neon(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage void sha256_ce_transform(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_ce_transform(struct sha256_block_state *state, const u8 *data, size_t nblocks); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce); -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && static_branch_likely(&have_neon) && crypto_simd_usable()) { kernel_neon_begin(); diff --git a/lib/crypto/arm64/sha256-ce.S b/lib/crypto/arm64/sha256-ce.S index f3e21c6d87d2e..b99d9589c4217 100644 --- a/lib/crypto/arm64/sha256-ce.S +++ b/lib/crypto/arm64/sha256-ce.S @@ -69,11 +69,11 @@ .word 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3 .word 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208 .word 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2 /* - * size_t __sha256_ce_transform(u32 state[SHA256_STATE_WORDS], + * size_t __sha256_ce_transform(struct sha256_block_state *state, * const u8 *data, size_t nblocks); */ .text SYM_FUNC_START(__sha256_ce_transform) /*
load round constants */ diff --git a/lib/crypto/arm64/sha256.c b/lib/crypto/arm64/sha256.c index fb9bff40357be..609ffb8151987 100644 --- a/lib/crypto/arm64/sha256.c +++ b/lib/crypto/arm64/sha256.c @@ -8,21 +8,21 @@ #include #include #include #include -asmlinkage void sha256_block_data_order(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_block_data_order(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage void sha256_block_neon(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_block_neon(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage size_t __sha256_ce_transform(u32 state[SHA256_STATE_WORDS], +asmlinkage size_t __sha256_ce_transform(struct sha256_block_state *state, const u8 *data, size_t nblocks); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce); -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && static_branch_likely(&have_neon) && crypto_simd_usable()) { if (static_branch_likely(&have_ce)) { diff --git a/lib/crypto/powerpc/sha256.c b/lib/crypto/powerpc/sha256.c index 6b0f079587eb6..c3f844ae0aceb 100644 --- a/lib/crypto/powerpc/sha256.c +++ b/lib/crypto/powerpc/sha256.c @@ -40,11 +40,11 @@ static void spe_end(void) disable_kernel_spe(); /* reenable preemption */ preempt_enable(); } -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { do { /* cut input data into smaller blocks */ u32 unit = min_t(size_t, nblocks, diff --git a/lib/crypto/riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.S b/lib/crypto/riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.S index fad501ad06171..1618d1220a6e7 100644 --- a/lib/crypto/riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.S +++
b/lib/crypto/riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.S @@ -104,11 +104,11 @@ sha256_4rounds \last, \k1, W1, W2, W3, W0 sha256_4rounds \last, \k2, W2, W3, W0, W1 sha256_4rounds \last, \k3, W3, W0, W1, W2 .endm -// void sha256_transform_zvknha_or_zvknhb_zvkb(u32 state[SHA256_STATE_WORDS], +// void sha256_transform_zvknha_or_zvknhb_zvkb(struct sha256_block_state *state, // const u8 *data, size_t nblocks); SYM_FUNC_START(sha256_transform_zvknha_or_zvknhb_zvkb) // Load the round constants into K0-K15. vsetivli zero, 4, e32, m1, ta, ma diff --git a/lib/crypto/riscv/sha256.c b/lib/crypto/riscv/sha256.c index aa77349d08f30..a2079aa3ae925 100644 --- a/lib/crypto/riscv/sha256.c +++ b/lib/crypto/riscv/sha256.c @@ -13,16 +13,17 @@ #include #include #include #include -asmlinkage void sha256_transform_zvknha_or_zvknhb_zvkb( - u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks); +asmlinkage void +sha256_transform_zvknha_or_zvknhb_zvkb(struct sha256_block_state *state, + const u8 *data, size_t nblocks); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_extensions); -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (static_branch_likely(&have_extensions) && crypto_simd_usable()) { kernel_vector_begin(); sha256_transform_zvknha_or_zvknhb_zvkb(state, data, nblocks); diff --git a/lib/crypto/s390/sha256.c b/lib/crypto/s390/sha256.c index 7dfe120fafaba..fb565718f7539 100644 --- a/lib/crypto/s390/sha256.c +++ b/lib/crypto/s390/sha256.c @@ -10,11 +10,11 @@ #include #include static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_cpacf_sha256); -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (static_branch_likely(&have_cpacf_sha256)) cpacf_kimd(CPACF_KIMD_SHA_256, state, data, nblocks * SHA256_BLOCK_SIZE); diff --git
a/lib/crypto/sha256-generic.c b/lib/crypto/sha256-generic.c index 2968d95d04038..99f904033c261 100644 --- a/lib/crypto/sha256-generic.c +++ b/lib/crypto/sha256-generic.c @@ -68,11 +68,11 @@ static inline void BLEND_OP(int I, u32 *W) t2 = e0(a) + Maj(a, b, c); \ d += t1; \ h = t1 + t2; \ } while (0) -static void sha256_block_generic(u32 state[SHA256_STATE_WORDS], +static void sha256_block_generic(struct sha256_block_state *state, const u8 *input, u32 W[64]) { u32 a, b, c, d, e, f, g, h; int i; @@ -99,12 +99,18 @@ static void sha256_block_generic(u32 state[SHA256_STATE_WORDS], BLEND_OP(i + 6, W); BLEND_OP(i + 7, W); } /* load the state into our registers */ - a = state[0]; b = state[1]; c = state[2]; d = state[3]; - e = state[4]; f = state[5]; g = state[6]; h = state[7]; + a = state->h[0]; + b = state->h[1]; + c = state->h[2]; + d = state->h[3]; + e = state->h[4]; + f = state->h[5]; + g = state->h[6]; + h = state->h[7]; /* now iterate */ for (i = 0; i < 64; i += 8) { SHA256_ROUND(i + 0, a, b, c, d, e, f, g, h); SHA256_ROUND(i + 1, h, a, b, c, d, e, f, g); @@ -114,15 +120,21 @@ static void sha256_block_generic(u32 state[SHA256_STATE_WORDS], SHA256_ROUND(i + 5, d, e, f, g, h, a, b, c); SHA256_ROUND(i + 6, c, d, e, f, g, h, a, b); SHA256_ROUND(i + 7, b, c, d, e, f, g, h, a); } - state[0] += a; state[1] += b; state[2] += c; state[3] += d; - state[4] += e; state[5] += f; state[6] += g; state[7] += h; + state->h[0] += a; + state->h[1] += b; + state->h[2] += c; + state->h[3] += d; + state->h[4] += e; + state->h[5] += f; + state->h[6] += g; + state->h[7] += h; } -void sha256_blocks_generic(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_generic(struct sha256_block_state *state, const u8 *data, size_t nblocks) { u32 W[64]; do { diff --git a/lib/crypto/sparc/sha256.c b/lib/crypto/sparc/sha256.c index 8bdec2db08b30..060664b88a6d3 100644 --- a/lib/crypto/sparc/sha256.c
+++ b/lib/crypto/sparc/sha256.c @@ -17,14 +17,14 @@ #include #include static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_opcodes); -asmlinkage void sha256_sparc64_transform(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_sparc64_transform(struct sha256_block_state *state, const u8 *data, size_t nblocks); -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (static_branch_likely(&have_sha256_opcodes)) sha256_sparc64_transform(state, data, nblocks); else diff --git a/lib/crypto/x86/sha256-avx-asm.S b/lib/crypto/x86/sha256-avx-asm.S index 0d7b2c3e45d9a..73bcff2b548f4 100644 --- a/lib/crypto/x86/sha256-avx-asm.S +++ b/lib/crypto/x86/sha256-avx-asm.S @@ -339,11 +339,11 @@ a = TMP_ add y0, h # h = h + S1 + CH + k + w + S0 + MAJ ROTATE_ARGS .endm ######################################################################## -## void sha256_transform_avx(u32 state[SHA256_STATE_WORDS], +## void sha256_transform_avx(struct sha256_block_state *state, ## const u8 *data, size_t nblocks); ######################################################################## .text SYM_FUNC_START(sha256_transform_avx) ANNOTATE_NOENDBR # since this is called only via static_call diff --git a/lib/crypto/x86/sha256-avx2-asm.S b/lib/crypto/x86/sha256-avx2-asm.S index 25d3380321ec3..45787570387f2 100644 --- a/lib/crypto/x86/sha256-avx2-asm.S +++ b/lib/crypto/x86/sha256-avx2-asm.S @@ -516,11 +516,11 @@ STACK_SIZE = _CTX + _CTX_SIZE ROTATE_ARGS .endm ######################################################################## -## void sha256_transform_rorx(u32 state[SHA256_STATE_WORDS], +## void sha256_transform_rorx(struct sha256_block_state *state, ## const u8 *data, size_t nblocks); ######################################################################## .text SYM_FUNC_START(sha256_transform_rorx) ANNOTATE_NOENDBR # since this is called only via static_call diff
--git a/lib/crypto/x86/sha256-ni-asm.S b/lib/crypto/x86/sha256-ni-asm.S index d3548206cf3d4..4af7d22e29e47 100644 --- a/lib/crypto/x86/sha256-ni-asm.S +++ b/lib/crypto/x86/sha256-ni-asm.S @@ -104,11 +104,11 @@ * input data, and the number of 64-byte blocks to process. Once all blocks * have been processed, the state is updated with the new state. This function * only processes complete blocks. State initialization, buffering of partial * blocks, and digest finalization is expected to be handled elsewhere. * - * void sha256_ni_transform(u32 state[SHA256_STATE_WORDS], + * void sha256_ni_transform(struct sha256_block_state *state, * const u8 *data, size_t nblocks); */ .text SYM_FUNC_START(sha256_ni_transform) ANNOTATE_NOENDBR # since this is called only via static_call diff --git a/lib/crypto/x86/sha256-ssse3-asm.S b/lib/crypto/x86/sha256-ssse3-asm.S index 7f24a4cdcb257..407b30adcd37f 100644 --- a/lib/crypto/x86/sha256-ssse3-asm.S +++ b/lib/crypto/x86/sha256-ssse3-asm.S @@ -346,11 +346,11 @@ a = TMP_ add y0, h # h = h + S1 + CH + k + w + S0 + MAJ ROTATE_ARGS .endm ######################################################################## -## void sha256_transform_ssse3(u32 state[SHA256_STATE_WORDS], +## void sha256_transform_ssse3(struct sha256_block_state *state, ## const u8 *data, size_t nblocks); ######################################################################## .text SYM_FUNC_START(sha256_transform_ssse3) ANNOTATE_NOENDBR # since this is called only via static_call diff --git a/lib/crypto/x86/sha256.c b/lib/crypto/x86/sha256.c index baba74d7d26f2..cbb45defbefab 100644 --- a/lib/crypto/x86/sha256.c +++ b/lib/crypto/x86/sha256.c @@ -9,24 +9,24 @@ #include #include #include #include -asmlinkage void sha256_transform_ssse3(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_transform_ssse3(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage void sha256_transform_avx(u32 state[SHA256_STATE_WORDS], +asmlinkage void
sha256_transform_avx(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage void sha256_transform_rorx(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_transform_rorx(struct sha256_block_state *state, const u8 *data, size_t nblocks); -asmlinkage void sha256_ni_transform(u32 state[SHA256_STATE_WORDS], +asmlinkage void sha256_ni_transform(struct sha256_block_state *state, const u8 *data, size_t nblocks); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_x86); DEFINE_STATIC_CALL(sha256_blocks_x86, sha256_transform_ssse3); -void sha256_blocks_arch(u32 state[SHA256_STATE_WORDS], +void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks) { if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable()) { kernel_fpu_begin(); static_call(sha256_blocks_x86)(state, data, nblocks); -- 2.50.0
From nobody Sun Sep 7 11:34:13 2025
From: Eric Biggers
Subject: [PATCH v2 08/14] lib/crypto: sha256: Add HMAC-SHA224 and HMAC-SHA256 support
Date: Mon, 30 Jun 2025 09:06:39 -0700
Message-ID: <20250630160645.3198-9-ebiggers@kernel.org>

Since HMAC support is commonly needed and is fairly simple, include it as a first-class citizen of the SHA-256 library.
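[Editorial note, not part of the patch: the helpers added here implement the standard HMAC construction from RFC 2104. As an illustrative sketch only — Python with hashlib standing in for the kernel's sha256()/sha256_blocks(), not kernel code — the construction looks like this:]

```python
import hashlib
import hmac

SHA256_BLOCK_SIZE = 64  # HMAC-SHA256 operates on 64-byte blocks

def manual_hmac_sha256(raw_key: bytes, data: bytes) -> bytes:
    """HMAC-SHA256 built directly from the RFC 2104 construction."""
    # Keys longer than one block are first hashed down to a digest,
    # mirroring the raw_key_len > SHA256_BLOCK_SIZE case in
    # __hmac_sha256_preparekey().
    if len(raw_key) > SHA256_BLOCK_SIZE:
        raw_key = hashlib.sha256(raw_key).digest()
    # Shorter keys are zero-padded to exactly one block.
    k = raw_key.ljust(SHA256_BLOCK_SIZE, b"\x00")
    ipad = bytes(b ^ 0x36 for b in k)  # HMAC_IPAD_VALUE
    opad = bytes(b ^ 0x5c for b in k)  # HMAC_OPAD_VALUE
    # The kernel code precomputes the states after absorbing the ipad and
    # opad blocks (istate/ostate); here the blocks are hashed inline.
    inner = hashlib.sha256(ipad + data).digest()
    return hashlib.sha256(opad + inner).digest()
```

Because the ipad and opad blocks are fixed once the key is known, hmac_sha256_preparekey() can absorb them once into istate/ostate and reuse those states for every message.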
The API supports both incremental and one-shot computation, and either preparing the key ahead of time or just using a raw key. The implementation is much more streamlined than crypto/hmac.c. I've kept it consistent with the HMAC-SHA384 and HMAC-SHA512 code as much as possible. Testing of these functions will be via sha224_kunit and sha256_kunit, added by a later commit. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- include/crypto/sha2.h | 222 ++++++++++++++++++++++++++++++++++++++++++ lib/crypto/sha256.c | 147 +++++++++++++++++++++++++++- 2 files changed, 364 insertions(+), 5 deletions(-) diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h index 18e1eec841b71..2e3fc2cf4aa0d 100644 --- a/include/crypto/sha2.h +++ b/include/crypto/sha2.h @@ -129,10 +129,26 @@ struct __sha256_ctx { u64 bytecount; u8 buf[SHA256_BLOCK_SIZE] __aligned(__alignof__(__be64)); }; void __sha256_update(struct __sha256_ctx *ctx, const u8 *data, size_t len); +/* + * HMAC key and message context structs, shared by HMAC-SHA224 and HMAC-SHA256. + * The hmac_sha224_* and hmac_sha256_* structs wrap this one so that the API has + * proper typing and doesn't allow mixing the functions arbitrarily.
+ */ +struct __hmac_sha256_key { + struct sha256_block_state istate; + struct sha256_block_state ostate; +}; +struct __hmac_sha256_ctx { + struct __sha256_ctx sha_ctx; + struct sha256_block_state ostate; +}; +void __hmac_sha256_init(struct __hmac_sha256_ctx *ctx, + const struct __hmac_sha256_key *key); + /** * struct sha224_ctx - Context for hashing a message with SHA-224 * @ctx: private */ struct sha224_ctx { @@ -146,10 +162,113 @@ static inline void sha224_update(struct sha224_ctx *ctx, __sha256_update(&ctx->ctx, data, len); } void sha224_final(struct sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]); void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE]); +/** + * struct hmac_sha224_key - Prepared key for HMAC-SHA224 + * @key: private + */ +struct hmac_sha224_key { + struct __hmac_sha256_key key; +}; + +/** + * struct hmac_sha224_ctx - Context for computing HMAC-SHA224 of a message + * @ctx: private + */ +struct hmac_sha224_ctx { + struct __hmac_sha256_ctx ctx; +}; + +/** + * hmac_sha224_preparekey() - Prepare a key for HMAC-SHA224 + * @key: (output) the key structure to initialize + * @raw_key: the raw HMAC-SHA224 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * + * Note: the caller is responsible for zeroizing both the struct hmac_sha224_key + * and the raw key once they are no longer needed. + * + * Context: Any context. + */ +void hmac_sha224_preparekey(struct hmac_sha224_key *key, + const u8 *raw_key, size_t raw_key_len); + +/** + * hmac_sha224_init() - Initialize an HMAC-SHA224 context for a new message + * @ctx: (output) the HMAC context to initialize + * @key: the prepared HMAC key + * + * If you don't need incremental computation, consider hmac_sha224() instead. + * + * Context: Any context.
+ */ +static inline void hmac_sha224_init(struct hmac_sha224_ctx *ctx, + const struct hmac_sha224_key *key) +{ + __hmac_sha256_init(&ctx->ctx, &key->key); +} + +/** + * hmac_sha224_update() - Update an HMAC-SHA224 context with message data + * @ctx: the HMAC context to update; must have been initialized + * @data: the message data + * @data_len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. + */ +static inline void hmac_sha224_update(struct hmac_sha224_ctx *ctx, + const u8 *data, size_t data_len) +{ + __sha256_update(&ctx->ctx.sha_ctx, data, data_len); +} + +/** + * hmac_sha224_final() - Finish computing an HMAC-SHA224 value + * @ctx: the HMAC context to finalize; must have been initialized + * @out: (output) the resulting HMAC-SHA224 value + * + * After finishing, this zeroizes @ctx. So the caller does not need to do it. + * + * Context: Any context. + */ +void hmac_sha224_final(struct hmac_sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]); + +/** + * hmac_sha224() - Compute HMAC-SHA224 in one shot, using a prepared key + * @key: the prepared HMAC key + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA224 value + * + * If you're using the key only once, consider using hmac_sha224_usingrawkey(). + * + * Context: Any context. + */ +void hmac_sha224(const struct hmac_sha224_key *key, + const u8 *data, size_t data_len, u8 out[SHA224_DIGEST_SIZE]); + +/** + * hmac_sha224_usingrawkey() - Compute HMAC-SHA224 in one shot, using a raw key + * @raw_key: the raw HMAC-SHA224 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA224 value + * + * If you're using the key multiple times, prefer to use + * hmac_sha224_preparekey() followed by multiple calls to hmac_sha224() instead. + * + * Context: Any context.
+ */ +void hmac_sha224_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA224_DIGEST_SIZE]); + /** * struct sha256_ctx - Context for hashing a message with SHA-256 * @ctx: private */ struct sha256_ctx { @@ -163,10 +282,113 @@ static inline void sha256_update(struct sha256_ctx *ctx, __sha256_update(&ctx->ctx, data, len); } void sha256_final(struct sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]); void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]); +/** + * struct hmac_sha256_key - Prepared key for HMAC-SHA256 + * @key: private + */ +struct hmac_sha256_key { + struct __hmac_sha256_key key; +}; + +/** + * struct hmac_sha256_ctx - Context for computing HMAC-SHA256 of a message + * @ctx: private + */ +struct hmac_sha256_ctx { + struct __hmac_sha256_ctx ctx; +}; + +/** + * hmac_sha256_preparekey() - Prepare a key for HMAC-SHA256 + * @key: (output) the key structure to initialize + * @raw_key: the raw HMAC-SHA256 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * + * Note: the caller is responsible for zeroizing both the struct hmac_sha256_key + * and the raw key once they are no longer needed. + * + * Context: Any context. + */ +void hmac_sha256_preparekey(struct hmac_sha256_key *key, + const u8 *raw_key, size_t raw_key_len); + +/** + * hmac_sha256_init() - Initialize an HMAC-SHA256 context for a new message + * @ctx: (output) the HMAC context to initialize + * @key: the prepared HMAC key + * + * If you don't need incremental computation, consider hmac_sha256() instead. + * + * Context: Any context.
+ */ +static inline void hmac_sha256_init(struct hmac_sha256_ctx *ctx, + const struct hmac_sha256_key *key) +{ + __hmac_sha256_init(&ctx->ctx, &key->key); +} + +/** + * hmac_sha256_update() - Update an HMAC-SHA256 context with message data + * @ctx: the HMAC context to update; must have been initialized + * @data: the message data + * @data_len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. + */ +static inline void hmac_sha256_update(struct hmac_sha256_ctx *ctx, + const u8 *data, size_t data_len) +{ + __sha256_update(&ctx->ctx.sha_ctx, data, data_len); +} + +/** + * hmac_sha256_final() - Finish computing an HMAC-SHA256 value + * @ctx: the HMAC context to finalize; must have been initialized + * @out: (output) the resulting HMAC-SHA256 value + * + * After finishing, this zeroizes @ctx. So the caller does not need to do it. + * + * Context: Any context. + */ +void hmac_sha256_final(struct hmac_sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]); + +/** + * hmac_sha256() - Compute HMAC-SHA256 in one shot, using a prepared key + * @key: the prepared HMAC key + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA256 value + * + * If you're using the key only once, consider using hmac_sha256_usingrawkey(). + * + * Context: Any context. + */ +void hmac_sha256(const struct hmac_sha256_key *key, + const u8 *data, size_t data_len, u8 out[SHA256_DIGEST_SIZE]); + +/** + * hmac_sha256_usingrawkey() - Compute HMAC-SHA256 in one shot, using a raw key + * @raw_key: the raw HMAC-SHA256 key + * @raw_key_len: the key length in bytes. All key lengths are supported. + * @data: the message data + * @data_len: the data length in bytes + * @out: (output) the resulting HMAC-SHA256 value + * + * If you're using the key multiple times, prefer to use + * hmac_sha256_preparekey() followed by multiple calls to hmac_sha256() instead. + * + * Context: Any context.
+ */ +void hmac_sha256_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA256_DIGEST_SIZE]); + /* State for the SHA-512 (and SHA-384) compression function */ struct sha512_block_state { u64 h[8]; }; diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c index 3e7797a4489de..12b4b59052c4a 100644 --- a/lib/crypto/sha256.c +++ b/lib/crypto/sha256.c @@ -1,24 +1,23 @@ // SPDX-License-Identifier: GPL-2.0-or-later /* - * SHA-256, as specified in - * http://csrc.nist.gov/groups/STM/cavp/documents/shs/sha256-384-512.pdf - * - * SHA-256 code by Jean-Luc Cooke . + * SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256 library functions * * Copyright (c) Jean-Luc Cooke * Copyright (c) Andrew McDonald * Copyright (c) 2002 James Morris * Copyright (c) 2014 Red Hat Inc. */ +#include #include #include #include #include #include #include +#include static const struct sha256_block_state sha224_iv = { .h = { SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3, SHA224_H4, SHA224_H5, SHA224_H6, SHA224_H7, @@ -134,7 +133,145 @@ void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]) sha256_update(&ctx, data, len); sha256_final(&ctx, out); } EXPORT_SYMBOL(sha256); -MODULE_DESCRIPTION("SHA-256 Algorithm"); +/* pre-boot environment (as indicated by __DISABLE_EXPORTS) doesn't need HMAC */ +#ifndef __DISABLE_EXPORTS +static void __hmac_sha256_preparekey(struct __hmac_sha256_key *key, + const u8 *raw_key, size_t raw_key_len, + const struct sha256_block_state *iv) +{ + union { + u8 b[SHA256_BLOCK_SIZE]; + unsigned long w[SHA256_BLOCK_SIZE / sizeof(unsigned long)]; + } derived_key = { 0 }; + + if (unlikely(raw_key_len > SHA256_BLOCK_SIZE)) { + if (iv == &sha224_iv) + sha224(raw_key, raw_key_len, derived_key.b); + else + sha256(raw_key, raw_key_len, derived_key.b); + } else { + memcpy(derived_key.b, raw_key, raw_key_len); + } + + for (size_t i = 0; i < ARRAY_SIZE(derived_key.w); i++) + derived_key.w[i] ^=
REPEAT_BYTE(HMAC_IPAD_VALUE); + key->istate = *iv; + sha256_blocks(&key->istate, derived_key.b, 1); + + for (size_t i = 0; i < ARRAY_SIZE(derived_key.w); i++) + derived_key.w[i] ^= REPEAT_BYTE(HMAC_OPAD_VALUE ^ + HMAC_IPAD_VALUE); + key->ostate = *iv; + sha256_blocks(&key->ostate, derived_key.b, 1); + + memzero_explicit(&derived_key, sizeof(derived_key)); +} + +void hmac_sha224_preparekey(struct hmac_sha224_key *key, + const u8 *raw_key, size_t raw_key_len) +{ + __hmac_sha256_preparekey(&key->key, raw_key, raw_key_len, &sha224_iv); +} +EXPORT_SYMBOL_GPL(hmac_sha224_preparekey); + +void hmac_sha256_preparekey(struct hmac_sha256_key *key, + const u8 *raw_key, size_t raw_key_len) +{ + __hmac_sha256_preparekey(&key->key, raw_key, raw_key_len, &sha256_iv); +} +EXPORT_SYMBOL_GPL(hmac_sha256_preparekey); + +void __hmac_sha256_init(struct __hmac_sha256_ctx *ctx, + const struct __hmac_sha256_key *key) +{ + __sha256_init(&ctx->sha_ctx, &key->istate, SHA256_BLOCK_SIZE); + ctx->ostate = key->ostate; +} +EXPORT_SYMBOL_GPL(__hmac_sha256_init); + +static void __hmac_sha256_final(struct __hmac_sha256_ctx *ctx, + u8 *out, size_t digest_size) +{ + /* Generate the padded input for the outer hash in ctx->sha_ctx.buf. */ + __sha256_final(&ctx->sha_ctx, ctx->sha_ctx.buf, digest_size); + memset(&ctx->sha_ctx.buf[digest_size], 0, + SHA256_BLOCK_SIZE - digest_size); + ctx->sha_ctx.buf[digest_size] = 0x80; + *(__be32 *)&ctx->sha_ctx.buf[SHA256_BLOCK_SIZE - 4] = + cpu_to_be32(8 * (SHA256_BLOCK_SIZE + digest_size)); + + /* Compute the outer hash, which gives the HMAC value.
*/ + sha256_blocks(&ctx->ostate, ctx->sha_ctx.buf, 1); + for (size_t i =3D 0; i < digest_size; i +=3D 4) + put_unaligned_be32(ctx->ostate.h[i / 4], out + i); + + memzero_explicit(ctx, sizeof(*ctx)); +} + +void hmac_sha224_final(struct hmac_sha224_ctx *ctx, + u8 out[SHA224_DIGEST_SIZE]) +{ + __hmac_sha256_final(&ctx->ctx, out, SHA224_DIGEST_SIZE); +} +EXPORT_SYMBOL_GPL(hmac_sha224_final); + +void hmac_sha256_final(struct hmac_sha256_ctx *ctx, + u8 out[SHA256_DIGEST_SIZE]) +{ + __hmac_sha256_final(&ctx->ctx, out, SHA256_DIGEST_SIZE); +} +EXPORT_SYMBOL_GPL(hmac_sha256_final); + +void hmac_sha224(const struct hmac_sha224_key *key, + const u8 *data, size_t data_len, u8 out[SHA224_DIGEST_SIZE]) +{ + struct hmac_sha224_ctx ctx; + + hmac_sha224_init(&ctx, key); + hmac_sha224_update(&ctx, data, data_len); + hmac_sha224_final(&ctx, out); +} +EXPORT_SYMBOL_GPL(hmac_sha224); + +void hmac_sha256(const struct hmac_sha256_key *key, + const u8 *data, size_t data_len, u8 out[SHA256_DIGEST_SIZE]) +{ + struct hmac_sha256_ctx ctx; + + hmac_sha256_init(&ctx, key); + hmac_sha256_update(&ctx, data, data_len); + hmac_sha256_final(&ctx, out); +} +EXPORT_SYMBOL_GPL(hmac_sha256); + +void hmac_sha224_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA224_DIGEST_SIZE]) +{ + struct hmac_sha224_key key; + + hmac_sha224_preparekey(&key, raw_key, raw_key_len); + hmac_sha224(&key, data, data_len, out); + + memzero_explicit(&key, sizeof(key)); +} +EXPORT_SYMBOL_GPL(hmac_sha224_usingrawkey); + +void hmac_sha256_usingrawkey(const u8 *raw_key, size_t raw_key_len, + const u8 *data, size_t data_len, + u8 out[SHA256_DIGEST_SIZE]) +{ + struct hmac_sha256_key key; + + hmac_sha256_preparekey(&key, raw_key, raw_key_len); + hmac_sha256(&key, data, data_len, out); + + memzero_explicit(&key, sizeof(key)); +} +EXPORT_SYMBOL_GPL(hmac_sha256_usingrawkey); +#endif /* !__DISABLE_EXPORTS */ + +MODULE_DESCRIPTION("SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256 library= 
functions"); MODULE_LICENSE("GPL"); --=20 2.50.0 From nobody Sun Sep 7 11:34:13 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 509112C324A; Mon, 30 Jun 2025 16:09:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1751299765; cv=none; b=IF+7n+43sEYi4DDWbPTsSLTusTfNYtPrON5tcQHvZbZRltGIsxW5zsfZojxanAIAK3zbWiZBIFT1JgzY6Z14ydLTCs9Sjc2FWTQBqiE5/jwPOusooIKqNpM6vi5mR+XiSfwQ+/xVwtXOa9bbNAVSbhDHqPSTh6c1Qvqm4SDlfWw= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1751299765; c=relaxed/simple; bh=Ud+50EJ47z//1Sh4DtoqV/NouXDVpnK8LBayY4mJ1Js=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=iVK4CdUKpa4W0FO6BBExTG/ufM//u+pLS/uVtCLQyqUpXj5RLW9qpBmCvFtaM5Rnw/sfOOq+8nsaXQDwJc+uoN2AEGSFPwahHF31RiGlXuAdVmW/27CbNg+h55irxdGazYKgCTKEsLLf/OeZkCvPk40UtXVXvTD9wuMSMyaJkK4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=I5eZbGfV; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="I5eZbGfV" Received: by smtp.kernel.org (Postfix) with ESMTPSA id BCDE1C4CEEF; Mon, 30 Jun 2025 16:09:24 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1751299765; bh=Ud+50EJ47z//1Sh4DtoqV/NouXDVpnK8LBayY4mJ1Js=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=I5eZbGfVIdqM8VSB4vxsWdg/hhm2gqnIuTKLnWxHTViVK5O62yJZb+8Ll7ftuwQca bbaMJouNDjFwzUJ56t8nS1nSwdF/F3KOKuyGfgW2XhUG5Ga+Pft0hTmTnaCmPGjJVg O2/axBgmGvoB7ONJJdB9Oe82DM7S3E+0YHxKf+wsf/S+UyZ7KkwpKxMq4A/nOGQQfl 
nS1VexUjoUtElRLI8yTuSD+lGbqNCEVajIrtHdRCT7tQnC/iTa2wavXk/L9t/35jV6 pbuUsePg7cjAwrssCk0G4hhXbXJvAgw6iIlMmZOUuoel0NnfS/hbFjnSLSUHGZn9ni TZSOo7zXFqExA== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , "Jason A . Donenfeld" , Eric Biggers Subject: [PATCH v2 09/14] crypto: sha256 - Wrap library and add HMAC support Date: Mon, 30 Jun 2025 09:06:40 -0700 Message-ID: <20250630160645.3198-10-ebiggers@kernel.org> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org> References: <20250630160645.3198-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Like I did for crypto/sha512.c, rework crypto/sha256.c to simply wrap the normal library functions instead of accessing the low-level arch- optimized and generic block functions directly. Also add support for HMAC-SHA224 and HMAC-SHA256, again just wrapping the library functions. Since the replacement crypto_shash algorithms are implemented using the (potentially arch-optimized) library functions, give them driver names ending with "-lib" rather than "-generic". Update crypto/testmgr.c and a couple odd drivers to take this change in driver name into account. Besides the above cases which are accounted for, there are no known cases where the driver names were being depended on. There is potential for confusion for people manually checking /proc/crypto (e.g. https://lore.kernel.org/r/9e33c893-2466-4d4e-afb1-966334e451a2@linux.ibm.co= m/), but really people just need to get used to the driver name not being meaningful for the software algorithms. 
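As background for the hmac-sha224-lib / hmac-sha256-lib wrappers this patch adds: the underlying library routines (hmac_sha256_preparekey(), hmac_sha256_init/update/final(), hmac_sha256_usingrawkey()) implement the standard RFC 2104 HMAC construction. A userspace Python sketch — not kernel code; hashlib/hmac are the Python standard library, and the key/data values are made up for illustration — of what hmac_sha256_usingrawkey() computes:

```python
import hashlib
import hmac

def hmac_sha256_usingrawkey(raw_key: bytes, data: bytes) -> bytes:
    """Manual RFC 2104 HMAC-SHA256, mirroring (at a high level) the
    kernel library's preparekey -> init -> update -> final flow."""
    block_size = 64  # SHA-256 block size in bytes
    # preparekey step: a key longer than one block is hashed down first,
    # then the key is zero-padded to exactly one block.
    if len(raw_key) > block_size:
        raw_key = hashlib.sha256(raw_key).digest()
    raw_key = raw_key.ljust(block_size, b"\x00")
    ipad = bytes(b ^ 0x36 for b in raw_key)
    opad = bytes(b ^ 0x5C for b in raw_key)
    inner = hashlib.sha256(ipad + data).digest()
    return hashlib.sha256(opad + inner).digest()

# Cross-check against the stdlib implementation.
key, data = b"hypothetical key", b"hello"
assert hmac_sha256_usingrawkey(key, data) == hmac.new(key, data, hashlib.sha256).digest()
```

The same construction with SHA-224 in place of SHA-256 (and the same 64-byte block size) yields HMAC-SHA224, which is why the patch can share almost all code between the two wrappers.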
Historically, the optimized code was disabled by default, so there was some purpose to checking whether it was enabled or not. However, this is now fixed for all SHA-2 algorithms, and the library code just always does the right thing. E.g. if the CPU supports SHA-256 instructions, they are used. This change does also mean that the generic partial block handling code in crypto/shash.c, which got added in 6.16, no longer gets used. But that's fine; the library has to implement the partial block handling anyway, and it's better to do it in the library since the block size and other properties of the algorithm are all fixed at compile time there, resulting in more streamlined code. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- crypto/Kconfig | 4 +- crypto/Makefile | 1 - crypto/sha256.c | 286 ++++++++++++-------------- crypto/testmgr.c | 12 ++ drivers/crypto/img-hash.c | 4 +- drivers/crypto/starfive/jh7110-hash.c | 8 +- 6 files changed, 148 insertions(+), 167 deletions(-) diff --git a/crypto/Kconfig b/crypto/Kconfig index cb40a9b469722..3ea1397214e02 100644 --- a/crypto/Kconfig +++ b/crypto/Kconfig @@ -990,13 +990,13 @@ config CRYPTO_SHA1 =20 config CRYPTO_SHA256 tristate "SHA-224 and SHA-256" select CRYPTO_HASH select CRYPTO_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC help - SHA-224 and SHA-256 secure hash algorithms (FIPS 180, ISO/IEC 10118-3) + SHA-224 and SHA-256 secure hash algorithms (FIPS 180, ISO/IEC + 10118-3), including HMAC support. =20 This is required for IPsec AH (XFRM_AH) and IPsec ESP (XFRM_ESP). Used by the btrfs filesystem, Ceph, NFS, and SMB. 
=20 config CRYPTO_SHA512 diff --git a/crypto/Makefile b/crypto/Makefile index 271c77462cec9..5098fa6d5f39c 100644 --- a/crypto/Makefile +++ b/crypto/Makefile @@ -75,11 +75,10 @@ obj-$(CONFIG_CRYPTO_NULL) +=3D crypto_null.o obj-$(CONFIG_CRYPTO_MD4) +=3D md4.o obj-$(CONFIG_CRYPTO_MD5) +=3D md5.o obj-$(CONFIG_CRYPTO_RMD160) +=3D rmd160.o obj-$(CONFIG_CRYPTO_SHA1) +=3D sha1_generic.o obj-$(CONFIG_CRYPTO_SHA256) +=3D sha256.o -CFLAGS_sha256.o +=3D -DARCH=3D$(ARCH) obj-$(CONFIG_CRYPTO_SHA512) +=3D sha512.o obj-$(CONFIG_CRYPTO_SHA3) +=3D sha3_generic.o obj-$(CONFIG_CRYPTO_SM3_GENERIC) +=3D sm3_generic.o obj-$(CONFIG_CRYPTO_STREEBOG) +=3D streebog_generic.o obj-$(CONFIG_CRYPTO_WP512) +=3D wp512.o diff --git a/crypto/sha256.c b/crypto/sha256.c index 15c57fba256b7..d81166cbba953 100644 --- a/crypto/sha256.c +++ b/crypto/sha256.c @@ -1,283 +1,253 @@ // SPDX-License-Identifier: GPL-2.0-or-later /* - * Crypto API wrapper for the SHA-256 and SHA-224 library functions + * Crypto API support for SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256 * * Copyright (c) Jean-Luc Cooke * Copyright (c) Andrew McDonald * Copyright (c) 2002 James Morris * SHA224 Support Copyright 2007 Intel Corporation + * Copyright 2025 Google LLC */ #include -#include +#include #include #include =20 +/* SHA-224 */ + const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] =3D { 0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, 0x47, 0x61, 0x02, 0xbb, 0x28, 0x82, 0x34, 0xc4, 0x15, 0xa2, 0xb0, 0x1f, 0x82, 0x8e, 0xa6, 0x2a, 0xc5, 0xb3, 0xe4, 0x2f }; EXPORT_SYMBOL_GPL(sha224_zero_message_hash); =20 +#define SHA224_CTX(desc) ((struct sha224_ctx *)shash_desc_ctx(desc)) + +static int crypto_sha224_init(struct shash_desc *desc) +{ + sha224_init(SHA224_CTX(desc)); + return 0; +} + +static int crypto_sha224_update(struct shash_desc *desc, + const u8 *data, unsigned int len) +{ + sha224_update(SHA224_CTX(desc), data, len); + return 0; +} + +static int crypto_sha224_final(struct shash_desc *desc, u8 *out) +{ + 
sha224_final(SHA224_CTX(desc), out); + return 0; +} + +static int crypto_sha224_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, u8 *out) +{ + sha224(data, len, out); + return 0; +} + +/* SHA-256 */ + const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] =3D { 0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14, 0x9a, 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24, 0x27, 0xae, 0x41, 0xe4, 0x64, 0x9b, 0x93, 0x4c, 0xa4, 0x95, 0x99, 0x1b, 0x78, 0x52, 0xb8, 0x55 }; EXPORT_SYMBOL_GPL(sha256_zero_message_hash); =20 +#define SHA256_CTX(desc) ((struct sha256_ctx *)shash_desc_ctx(desc)) + static int crypto_sha256_init(struct shash_desc *desc) { - sha256_block_init(shash_desc_ctx(desc)); + sha256_init(SHA256_CTX(desc)); return 0; } =20 -static inline int crypto_sha256_update(struct shash_desc *desc, const u8 *= data, - unsigned int len, bool force_generic) +static int crypto_sha256_update(struct shash_desc *desc, + const u8 *data, unsigned int len) { - struct crypto_sha256_state *sctx =3D shash_desc_ctx(desc); - int remain =3D len % SHA256_BLOCK_SIZE; - - sctx->count +=3D len - remain; - sha256_choose_blocks(sctx->state, data, len / SHA256_BLOCK_SIZE, - force_generic, !force_generic); - return remain; + sha256_update(SHA256_CTX(desc), data, len); + return 0; } =20 -static int crypto_sha256_update_generic(struct shash_desc *desc, const u8 = *data, - unsigned int len) +static int crypto_sha256_final(struct shash_desc *desc, u8 *out) { - return crypto_sha256_update(desc, data, len, true); + sha256_final(SHA256_CTX(desc), out); + return 0; } =20 -static int crypto_sha256_update_lib(struct shash_desc *desc, const u8 *dat= a, - unsigned int len) +static int crypto_sha256_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, u8 *out) { - sha256_update(shash_desc_ctx(desc), data, len); + sha256(data, len, out); return 0; } =20 -static int crypto_sha256_update_arch(struct shash_desc *desc, const u8 *da= ta, - unsigned int len) -{ - return 
crypto_sha256_update(desc, data, len, false); -} +/* HMAC-SHA224 */ =20 -static int crypto_sha256_final_lib(struct shash_desc *desc, u8 *out) -{ - sha256_final(shash_desc_ctx(desc), out); - return 0; -} +#define HMAC_SHA224_KEY(tfm) ((struct hmac_sha224_key *)crypto_shash_ctx(t= fm)) +#define HMAC_SHA224_CTX(desc) ((struct hmac_sha224_ctx *)shash_desc_ctx(de= sc)) =20 -static __always_inline int crypto_sha256_finup(struct shash_desc *desc, - const u8 *data, - unsigned int len, u8 *out, - bool force_generic) +static int crypto_hmac_sha224_setkey(struct crypto_shash *tfm, + const u8 *raw_key, unsigned int keylen) { - struct crypto_sha256_state *sctx =3D shash_desc_ctx(desc); - unsigned int remain =3D len; - u8 *buf; - - if (len >=3D SHA256_BLOCK_SIZE) - remain =3D crypto_sha256_update(desc, data, len, force_generic); - sctx->count +=3D remain; - buf =3D memcpy(sctx + 1, data + len - remain, remain); - sha256_finup(sctx, buf, remain, out, - crypto_shash_digestsize(desc->tfm), force_generic, - !force_generic); + hmac_sha224_preparekey(HMAC_SHA224_KEY(tfm), raw_key, keylen); return 0; } =20 -static int crypto_sha256_finup_generic(struct shash_desc *desc, const u8 *= data, - unsigned int len, u8 *out) +static int crypto_hmac_sha224_init(struct shash_desc *desc) { - return crypto_sha256_finup(desc, data, len, out, true); + hmac_sha224_init(HMAC_SHA224_CTX(desc), HMAC_SHA224_KEY(desc->tfm)); + return 0; } =20 -static int crypto_sha256_finup_arch(struct shash_desc *desc, const u8 *dat= a, - unsigned int len, u8 *out) +static int crypto_hmac_sha224_update(struct shash_desc *desc, + const u8 *data, unsigned int len) { - return crypto_sha256_finup(desc, data, len, out, false); + hmac_sha224_update(HMAC_SHA224_CTX(desc), data, len); + return 0; } =20 -static int crypto_sha256_digest_generic(struct shash_desc *desc, const u8 = *data, - unsigned int len, u8 *out) +static int crypto_hmac_sha224_final(struct shash_desc *desc, u8 *out) { - crypto_sha256_init(desc); - return 
crypto_sha256_finup_generic(desc, data, len, out); + hmac_sha224_final(HMAC_SHA224_CTX(desc), out); + return 0; } =20 -static int crypto_sha256_digest_lib(struct shash_desc *desc, const u8 *dat= a, - unsigned int len, u8 *out) +static int crypto_hmac_sha224_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, + u8 *out) { - sha256(data, len, out); + hmac_sha224(HMAC_SHA224_KEY(desc->tfm), data, len, out); return 0; } =20 -static int crypto_sha256_digest_arch(struct shash_desc *desc, const u8 *da= ta, - unsigned int len, u8 *out) +/* HMAC-SHA256 */ + +#define HMAC_SHA256_KEY(tfm) ((struct hmac_sha256_key *)crypto_shash_ctx(t= fm)) +#define HMAC_SHA256_CTX(desc) ((struct hmac_sha256_ctx *)shash_desc_ctx(de= sc)) + +static int crypto_hmac_sha256_setkey(struct crypto_shash *tfm, + const u8 *raw_key, unsigned int keylen) { - crypto_sha256_init(desc); - return crypto_sha256_finup_arch(desc, data, len, out); + hmac_sha256_preparekey(HMAC_SHA256_KEY(tfm), raw_key, keylen); + return 0; } =20 -static int crypto_sha224_init(struct shash_desc *desc) +static int crypto_hmac_sha256_init(struct shash_desc *desc) { - sha224_block_init(shash_desc_ctx(desc)); + hmac_sha256_init(HMAC_SHA256_CTX(desc), HMAC_SHA256_KEY(desc->tfm)); return 0; } =20 -static int crypto_sha224_final_lib(struct shash_desc *desc, u8 *out) +static int crypto_hmac_sha256_update(struct shash_desc *desc, + const u8 *data, unsigned int len) { - sha224_final(shash_desc_ctx(desc), out); + hmac_sha256_update(HMAC_SHA256_CTX(desc), data, len); return 0; } =20 -static int crypto_sha256_import_lib(struct shash_desc *desc, const void *i= n) +static int crypto_hmac_sha256_final(struct shash_desc *desc, u8 *out) { - struct __sha256_ctx *sctx =3D shash_desc_ctx(desc); - const u8 *p =3D in; - - memcpy(sctx, p, sizeof(*sctx)); - p +=3D sizeof(*sctx); - sctx->bytecount +=3D *p; + hmac_sha256_final(HMAC_SHA256_CTX(desc), out); return 0; } =20 -static int crypto_sha256_export_lib(struct shash_desc *desc, void 
*out) +static int crypto_hmac_sha256_digest(struct shash_desc *desc, + const u8 *data, unsigned int len, + u8 *out) { - struct __sha256_ctx *sctx0 =3D shash_desc_ctx(desc); - struct __sha256_ctx sctx =3D *sctx0; - unsigned int partial; - u8 *p =3D out; - - partial =3D sctx.bytecount % SHA256_BLOCK_SIZE; - sctx.bytecount -=3D partial; - memcpy(p, &sctx, sizeof(sctx)); - p +=3D sizeof(sctx); - *p =3D partial; + hmac_sha256(HMAC_SHA256_KEY(desc->tfm), data, len, out); return 0; } =20 +/* Algorithm definitions */ + static struct shash_alg algs[] =3D { - { - .base.cra_name =3D "sha256", - .base.cra_driver_name =3D "sha256-generic", - .base.cra_priority =3D 100, - .base.cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize =3D SHA256_BLOCK_SIZE, - .base.cra_module =3D THIS_MODULE, - .digestsize =3D SHA256_DIGEST_SIZE, - .init =3D crypto_sha256_init, - .update =3D crypto_sha256_update_generic, - .finup =3D crypto_sha256_finup_generic, - .digest =3D crypto_sha256_digest_generic, - .descsize =3D sizeof(struct crypto_sha256_state), - }, { .base.cra_name =3D "sha224", - .base.cra_driver_name =3D "sha224-generic", - .base.cra_priority =3D 100, - .base.cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, + .base.cra_driver_name =3D "sha224-lib", + .base.cra_priority =3D 300, .base.cra_blocksize =3D SHA224_BLOCK_SIZE, .base.cra_module =3D THIS_MODULE, .digestsize =3D SHA224_DIGEST_SIZE, .init =3D crypto_sha224_init, - .update =3D crypto_sha256_update_generic, - .finup =3D crypto_sha256_finup_generic, - .descsize =3D sizeof(struct crypto_sha256_state), + .update =3D crypto_sha224_update, + .final =3D crypto_sha224_final, + .digest =3D crypto_sha224_digest, + .descsize =3D sizeof(struct sha224_ctx), }, { .base.cra_name =3D "sha256", .base.cra_driver_name =3D "sha256-lib", + .base.cra_priority =3D 300, .base.cra_blocksize =3D SHA256_BLOCK_SIZE, .base.cra_module =3D THIS_MODULE, .digestsize =3D SHA256_DIGEST_SIZE, .init 
=3D crypto_sha256_init, - .update =3D crypto_sha256_update_lib, - .final =3D crypto_sha256_final_lib, - .digest =3D crypto_sha256_digest_lib, + .update =3D crypto_sha256_update, + .final =3D crypto_sha256_final, + .digest =3D crypto_sha256_digest, .descsize =3D sizeof(struct sha256_ctx), - .statesize =3D sizeof(struct crypto_sha256_state) + - SHA256_BLOCK_SIZE + 1, - .import =3D crypto_sha256_import_lib, - .export =3D crypto_sha256_export_lib, }, { - .base.cra_name =3D "sha224", - .base.cra_driver_name =3D "sha224-lib", + .base.cra_name =3D "hmac(sha224)", + .base.cra_driver_name =3D "hmac-sha224-lib", + .base.cra_priority =3D 300, .base.cra_blocksize =3D SHA224_BLOCK_SIZE, + .base.cra_ctxsize =3D sizeof(struct hmac_sha224_key), .base.cra_module =3D THIS_MODULE, .digestsize =3D SHA224_DIGEST_SIZE, - .init =3D crypto_sha224_init, - .update =3D crypto_sha256_update_lib, - .final =3D crypto_sha224_final_lib, - .descsize =3D sizeof(struct sha224_ctx), - .statesize =3D sizeof(struct crypto_sha256_state) + - SHA256_BLOCK_SIZE + 1, - .import =3D crypto_sha256_import_lib, - .export =3D crypto_sha256_export_lib, + .setkey =3D crypto_hmac_sha224_setkey, + .init =3D crypto_hmac_sha224_init, + .update =3D crypto_hmac_sha224_update, + .final =3D crypto_hmac_sha224_final, + .digest =3D crypto_hmac_sha224_digest, + .descsize =3D sizeof(struct hmac_sha224_ctx), }, { - .base.cra_name =3D "sha256", - .base.cra_driver_name =3D "sha256-" __stringify(ARCH), + .base.cra_name =3D "hmac(sha256)", + .base.cra_driver_name =3D "hmac-sha256-lib", .base.cra_priority =3D 300, - .base.cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, .base.cra_blocksize =3D SHA256_BLOCK_SIZE, + .base.cra_ctxsize =3D sizeof(struct hmac_sha256_key), .base.cra_module =3D THIS_MODULE, .digestsize =3D SHA256_DIGEST_SIZE, - .init =3D crypto_sha256_init, - .update =3D crypto_sha256_update_arch, - .finup =3D crypto_sha256_finup_arch, - .digest =3D crypto_sha256_digest_arch, - .descsize =3D 
sizeof(struct crypto_sha256_state), - }, - { - .base.cra_name =3D "sha224", - .base.cra_driver_name =3D "sha224-" __stringify(ARCH), - .base.cra_priority =3D 300, - .base.cra_flags =3D CRYPTO_AHASH_ALG_BLOCK_ONLY | - CRYPTO_AHASH_ALG_FINUP_MAX, - .base.cra_blocksize =3D SHA224_BLOCK_SIZE, - .base.cra_module =3D THIS_MODULE, - .digestsize =3D SHA224_DIGEST_SIZE, - .init =3D crypto_sha224_init, - .update =3D crypto_sha256_update_arch, - .finup =3D crypto_sha256_finup_arch, - .descsize =3D sizeof(struct crypto_sha256_state), + .setkey =3D crypto_hmac_sha256_setkey, + .init =3D crypto_hmac_sha256_init, + .update =3D crypto_hmac_sha256_update, + .final =3D crypto_hmac_sha256_final, + .digest =3D crypto_hmac_sha256_digest, + .descsize =3D sizeof(struct hmac_sha256_ctx), }, }; =20 -static unsigned int num_algs; - static int __init crypto_sha256_mod_init(void) { - /* register the arch flavours only if they differ from generic */ - num_algs =3D ARRAY_SIZE(algs); - BUILD_BUG_ON(ARRAY_SIZE(algs) <=3D 2); - if (!sha256_is_arch_optimized()) - num_algs -=3D 2; return crypto_register_shashes(algs, ARRAY_SIZE(algs)); } module_init(crypto_sha256_mod_init); =20 static void __exit crypto_sha256_mod_exit(void) { - crypto_unregister_shashes(algs, num_algs); + crypto_unregister_shashes(algs, ARRAY_SIZE(algs)); } module_exit(crypto_sha256_mod_exit); =20 MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("Crypto API wrapper for the SHA-256 and SHA-224 library= functions"); +MODULE_DESCRIPTION("Crypto API support for SHA-224, SHA-256, HMAC-SHA224, = and HMAC-SHA256"); =20 -MODULE_ALIAS_CRYPTO("sha256"); -MODULE_ALIAS_CRYPTO("sha256-generic"); -MODULE_ALIAS_CRYPTO("sha256-" __stringify(ARCH)); MODULE_ALIAS_CRYPTO("sha224"); -MODULE_ALIAS_CRYPTO("sha224-generic"); -MODULE_ALIAS_CRYPTO("sha224-" __stringify(ARCH)); +MODULE_ALIAS_CRYPTO("sha224-lib"); +MODULE_ALIAS_CRYPTO("sha256"); +MODULE_ALIAS_CRYPTO("sha256-lib"); +MODULE_ALIAS_CRYPTO("hmac(sha224)"); +MODULE_ALIAS_CRYPTO("hmac-sha224-lib"); 
+MODULE_ALIAS_CRYPTO("hmac(sha256)"); +MODULE_ALIAS_CRYPTO("hmac-sha256-lib"); diff --git a/crypto/testmgr.c b/crypto/testmgr.c index 9d8b11ea4af7f..4e95567f7ed17 100644 --- a/crypto/testmgr.c +++ b/crypto/testmgr.c @@ -4268,45 +4268,51 @@ static const struct alg_test_desc alg_test_descs[] = =3D { .alg =3D "authenc(hmac(sha1),rfc3686(ctr(aes)))", .test =3D alg_test_null, .fips_allowed =3D 1, }, { .alg =3D "authenc(hmac(sha224),cbc(des))", + .generic_driver =3D "authenc(hmac-sha224-lib,cbc(des-generic))", .test =3D alg_test_aead, .suite =3D { .aead =3D __VECS(hmac_sha224_des_cbc_tv_temp) } }, { .alg =3D "authenc(hmac(sha224),cbc(des3_ede))", + .generic_driver =3D "authenc(hmac-sha224-lib,cbc(des3_ede-generic))", .test =3D alg_test_aead, .suite =3D { .aead =3D __VECS(hmac_sha224_des3_ede_cbc_tv_temp) } }, { .alg =3D "authenc(hmac(sha256),cbc(aes))", + .generic_driver =3D "authenc(hmac-sha256-lib,cbc(aes-generic))", .test =3D alg_test_aead, .fips_allowed =3D 1, .suite =3D { .aead =3D __VECS(hmac_sha256_aes_cbc_tv_temp) } }, { .alg =3D "authenc(hmac(sha256),cbc(des))", + .generic_driver =3D "authenc(hmac-sha256-lib,cbc(des-generic))", .test =3D alg_test_aead, .suite =3D { .aead =3D __VECS(hmac_sha256_des_cbc_tv_temp) } }, { .alg =3D "authenc(hmac(sha256),cbc(des3_ede))", + .generic_driver =3D "authenc(hmac-sha256-lib,cbc(des3_ede-generic))", .test =3D alg_test_aead, .suite =3D { .aead =3D __VECS(hmac_sha256_des3_ede_cbc_tv_temp) } }, { .alg =3D "authenc(hmac(sha256),ctr(aes))", .test =3D alg_test_null, .fips_allowed =3D 1, }, { .alg =3D "authenc(hmac(sha256),cts(cbc(aes)))", + .generic_driver =3D "authenc(hmac-sha256-lib,cts(cbc(aes-generic)))", .test =3D alg_test_aead, .suite =3D { .aead =3D __VECS(krb5_test_aes128_cts_hmac_sha256_128) } }, { @@ -5013,17 +5019,19 @@ static const struct alg_test_desc alg_test_descs[] = =3D { .suite =3D { .sig =3D __VECS(ecrdsa_tv_template) } }, { .alg =3D "essiv(authenc(hmac(sha256),cbc(aes)),sha256)", + .generic_driver =3D 
"essiv(authenc(hmac-sha256-lib,cbc(aes-generic)),sha= 256-lib)", .test =3D alg_test_aead, .fips_allowed =3D 1, .suite =3D { .aead =3D __VECS(essiv_hmac_sha256_aes_cbc_tv_temp) } }, { .alg =3D "essiv(cbc(aes),sha256)", + .generic_driver =3D "essiv(cbc(aes-generic),sha256-lib)", .test =3D alg_test_skcipher, .fips_allowed =3D 1, .suite =3D { .cipher =3D __VECS(essiv_aes_cbc_tv_template) } @@ -5119,17 +5127,19 @@ static const struct alg_test_desc alg_test_descs[] = =3D { .suite =3D { .hash =3D __VECS(hmac_sha1_tv_template) } }, { .alg =3D "hmac(sha224)", + .generic_driver =3D "hmac-sha224-lib", .test =3D alg_test_hash, .fips_allowed =3D 1, .suite =3D { .hash =3D __VECS(hmac_sha224_tv_template) } }, { .alg =3D "hmac(sha256)", + .generic_driver =3D "hmac-sha256-lib", .test =3D alg_test_hash, .fips_allowed =3D 1, .suite =3D { .hash =3D __VECS(hmac_sha256_tv_template) } @@ -5457,17 +5467,19 @@ static const struct alg_test_desc alg_test_descs[] = =3D { .suite =3D { .hash =3D __VECS(sha1_tv_template) } }, { .alg =3D "sha224", + .generic_driver =3D "sha224-lib", .test =3D alg_test_hash, .fips_allowed =3D 1, .suite =3D { .hash =3D __VECS(sha224_tv_template) } }, { .alg =3D "sha256", + .generic_driver =3D "sha256-lib", .test =3D alg_test_hash, .fips_allowed =3D 1, .suite =3D { .hash =3D __VECS(sha256_tv_template) } diff --git a/drivers/crypto/img-hash.c b/drivers/crypto/img-hash.c index e050f5ff5efb6..f312eb075feca 100644 --- a/drivers/crypto/img-hash.c +++ b/drivers/crypto/img-hash.c @@ -708,16 +708,16 @@ static int img_hash_cra_sha1_init(struct crypto_tfm *= tfm) return img_hash_cra_init(tfm, "sha1-generic"); } =20 static int img_hash_cra_sha224_init(struct crypto_tfm *tfm) { - return img_hash_cra_init(tfm, "sha224-generic"); + return img_hash_cra_init(tfm, "sha224-lib"); } =20 static int img_hash_cra_sha256_init(struct crypto_tfm *tfm) { - return img_hash_cra_init(tfm, "sha256-generic"); + return img_hash_cra_init(tfm, "sha256-lib"); } =20 static void 
img_hash_cra_exit(struct crypto_tfm *tfm) { struct img_hash_ctx *tctx =3D crypto_tfm_ctx(tfm); diff --git a/drivers/crypto/starfive/jh7110-hash.c b/drivers/crypto/starfiv= e/jh7110-hash.c index 4abbff07412ff..6cfe0238f615f 100644 --- a/drivers/crypto/starfive/jh7110-hash.c +++ b/drivers/crypto/starfive/jh7110-hash.c @@ -491,17 +491,17 @@ static int starfive_hash_setkey(struct crypto_ahash *= hash, return starfive_hash_long_setkey(ctx, key, keylen, alg_name); } =20 static int starfive_sha224_init_tfm(struct crypto_ahash *hash) { - return starfive_hash_init_tfm(hash, "sha224-generic", + return starfive_hash_init_tfm(hash, "sha224-lib", STARFIVE_HASH_SHA224, 0); } =20 static int starfive_sha256_init_tfm(struct crypto_ahash *hash) { - return starfive_hash_init_tfm(hash, "sha256-generic", + return starfive_hash_init_tfm(hash, "sha256-lib", STARFIVE_HASH_SHA256, 0); } =20 static int starfive_sha384_init_tfm(struct crypto_ahash *hash) { @@ -521,17 +521,17 @@ static int starfive_sm3_init_tfm(struct crypto_ahash = *hash) STARFIVE_HASH_SM3, 0); } =20 static int starfive_hmac_sha224_init_tfm(struct crypto_ahash *hash) { - return starfive_hash_init_tfm(hash, "hmac(sha224-generic)", + return starfive_hash_init_tfm(hash, "hmac-sha224-lib", STARFIVE_HASH_SHA224, 1); } =20 static int starfive_hmac_sha256_init_tfm(struct crypto_ahash *hash) { - return starfive_hash_init_tfm(hash, "hmac(sha256-generic)", + return starfive_hash_init_tfm(hash, "hmac-sha256-lib", STARFIVE_HASH_SHA256, 1); } =20 static int starfive_hmac_sha384_init_tfm(struct crypto_ahash *hash) { --=20 2.50.0 From nobody Sun Sep 7 11:34:13 2025 Received: from smtp.kernel.org (aws-us-west-2-korg-mail-1.web.codeaurora.org [10.30.226.201]) (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits)) (No client certificate requested) by smtp.subspace.kernel.org (Postfix) with ESMTPS id 190F92D3EE3; Mon, 30 Jun 2025 16:09:25 +0000 (UTC) Authentication-Results: smtp.subspace.kernel.org; arc=none 
smtp.client-ip=10.30.226.201 ARC-Seal: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1751299766; cv=none; b=uDyqAKhZtDKDkPCEzcP2DN2To3QRMWLhkjqPNgsxw+HnJxpE6pxS15uxGzCmRnrmjWypJexQyaTqRYuN+BR2jpaWVcCsUz/T2Y15EKEvlWWl4U/Z+Tk8pkc+R5hPj5wP8gcuXSUaPfD2atpO3Uf/KBlUzeG27oUx470zRerXpjM= ARC-Message-Signature: i=1; a=rsa-sha256; d=subspace.kernel.org; s=arc-20240116; t=1751299766; c=relaxed/simple; bh=IkX8c809Bkwa5YKNS0n0XaW5ZPy/jAKfSUJXB4aJ5O4=; h=From:To:Cc:Subject:Date:Message-ID:In-Reply-To:References: MIME-Version; b=O9IiEnGmp8bu3JVMwuPIFZr7YfCYg0CzekMoCaLk/pL2xsSMkkrtn1WiQoCupFyURIfFT7/04r86XYGLOsjS60ARhX3ZaAtc+T/sNMfpWbcEqTPAGLZweViKM/k3Z6xGRIWhtQENuY0d/EZriSJD+mLf2AGVwGQusAEEJvZenn4= ARC-Authentication-Results: i=1; smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b=hVQBIRMj; arc=none smtp.client-ip=10.30.226.201 Authentication-Results: smtp.subspace.kernel.org; dkim=pass (2048-bit key) header.d=kernel.org header.i=@kernel.org header.b="hVQBIRMj" Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4DA14C4CEF3; Mon, 30 Jun 2025 16:09:25 +0000 (UTC) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org; s=k20201202; t=1751299765; bh=IkX8c809Bkwa5YKNS0n0XaW5ZPy/jAKfSUJXB4aJ5O4=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=hVQBIRMj89xj2I1q9MVzVBAjICgLzgrDPQGlkMEPYi6HkkfGwSqIKdUNwVzeBw43u pLMZD2xvvsNNH+fstFXvgHc0S/K+L2xVKEbMCCPCufoZ0/heTewIkqVByYK1ZWhTw0 Hy2bV8iB2xINx5piSLStSpNhwTM8MqnEaLYJw7KNKk6zF0ZAJp4VPXHfuIPzb8qigv gq/sjjO5QScDsTz4EmKmYti8u4FJjxfBDkmAGV/fHoJ21l17GLfNAP/9dbDQMNXKoT xIgK7YllgkWIiNyIBpu/iZEwvbi6yxZxIbsudqsZ7UWS67uj446XOxPtm5262mi0PL /HoSYTOI/uH3w== From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , "Jason A . 
Donenfeld" , Eric Biggers Subject: [PATCH v2 10/14] crypto: sha256 - Use same state format as legacy drivers Date: Mon, 30 Jun 2025 09:06:41 -0700 Message-ID: <20250630160645.3198-11-ebiggers@kernel.org> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org> References: <20250630160645.3198-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Make the export and import functions for the sha224, sha256, hmac(sha224), and hmac(sha256) shash algorithms use the same format as the padlock-sha and nx-sha256 drivers, as required by Herbert. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- crypto/sha256.c | 95 +++++++++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 95 insertions(+) diff --git a/crypto/sha256.c b/crypto/sha256.c index d81166cbba953..052806559f06c 100644 --- a/crypto/sha256.c +++ b/crypto/sha256.c @@ -11,10 +11,47 @@ #include #include #include #include =20 +/* + * Export and import functions. crypto_shash wants a particular format th= at + * matches that used by some legacy drivers. It currently is the same as = the + * library SHA context, except the value in bytecount must be block-aligne= d and + * the remainder must be stored in an extra u8 appended to the struct. 
+ */ + +#define SHA256_SHASH_STATE_SIZE 105 +static_assert(offsetof(struct __sha256_ctx, state) =3D=3D 0); +static_assert(offsetof(struct __sha256_ctx, bytecount) =3D=3D 32); +static_assert(offsetof(struct __sha256_ctx, buf) =3D=3D 40); +static_assert(sizeof(struct __sha256_ctx) + 1 =3D=3D SHA256_SHASH_STATE_SI= ZE); + +static int __crypto_sha256_export(const struct __sha256_ctx *ctx0, void *o= ut) +{ + struct __sha256_ctx ctx =3D *ctx0; + unsigned int partial; + u8 *p =3D out; + + partial =3D ctx.bytecount % SHA256_BLOCK_SIZE; + ctx.bytecount -=3D partial; + memcpy(p, &ctx, sizeof(ctx)); + p +=3D sizeof(ctx); + *p =3D partial; + return 0; +} + +static int __crypto_sha256_import(struct __sha256_ctx *ctx, const void *in) +{ + const u8 *p =3D in; + + memcpy(ctx, p, sizeof(*ctx)); + p +=3D sizeof(*ctx); + ctx->bytecount +=3D *p; + return 0; +} + /* SHA-224 */ =20 const u8 sha224_zero_message_hash[SHA224_DIGEST_SIZE] =3D { 0xd1, 0x4a, 0x02, 0x8c, 0x2a, 0x3a, 0x2b, 0xc9, 0x47, 0x61, 0x02, 0xbb, 0x28, 0x82, 0x34, 0xc4, 0x15, 0xa2, @@ -49,10 +86,20 @@ static int crypto_sha224_digest(struct shash_desc *desc, { sha224(data, len, out); return 0; } =20 +static int crypto_sha224_export(struct shash_desc *desc, void *out) +{ + return __crypto_sha256_export(&SHA224_CTX(desc)->ctx, out); +} + +static int crypto_sha224_import(struct shash_desc *desc, const void *in) +{ + return __crypto_sha256_import(&SHA224_CTX(desc)->ctx, in); +} + /* SHA-256 */ =20 const u8 sha256_zero_message_hash[SHA256_DIGEST_SIZE] =3D { 0xe3, 0xb0, 0xc4, 0x42, 0x98, 0xfc, 0x1c, 0x14, 0x9a, 0xfb, 0xf4, 0xc8, 0x99, 0x6f, 0xb9, 0x24, @@ -87,10 +134,20 @@ static int crypto_sha256_digest(struct shash_desc *des= c, { sha256(data, len, out); return 0; } =20 +static int crypto_sha256_export(struct shash_desc *desc, void *out) +{ + return __crypto_sha256_export(&SHA256_CTX(desc)->ctx, out); +} + +static int crypto_sha256_import(struct shash_desc *desc, const void *in) +{ + return 
__crypto_sha256_import(&SHA256_CTX(desc)->ctx, in); +} + /* HMAC-SHA224 */ =20 #define HMAC_SHA224_KEY(tfm) ((struct hmac_sha224_key *)crypto_shash_ctx(t= fm)) #define HMAC_SHA224_CTX(desc) ((struct hmac_sha224_ctx *)shash_desc_ctx(de= sc)) =20 @@ -126,10 +183,23 @@ static int crypto_hmac_sha224_digest(struct shash_des= c *desc, { hmac_sha224(HMAC_SHA224_KEY(desc->tfm), data, len, out); return 0; } =20 +static int crypto_hmac_sha224_export(struct shash_desc *desc, void *out) +{ + return __crypto_sha256_export(&HMAC_SHA224_CTX(desc)->ctx.sha_ctx, out); +} + +static int crypto_hmac_sha224_import(struct shash_desc *desc, const void *= in) +{ + struct hmac_sha224_ctx *ctx =3D HMAC_SHA224_CTX(desc); + + ctx->ctx.ostate =3D HMAC_SHA224_KEY(desc->tfm)->key.ostate; + return __crypto_sha256_import(&ctx->ctx.sha_ctx, in); +} + /* HMAC-SHA256 */ =20 #define HMAC_SHA256_KEY(tfm) ((struct hmac_sha256_key *)crypto_shash_ctx(t= fm)) #define HMAC_SHA256_CTX(desc) ((struct hmac_sha256_ctx *)shash_desc_ctx(de= sc)) =20 @@ -165,10 +235,23 @@ static int crypto_hmac_sha256_digest(struct shash_des= c *desc, { hmac_sha256(HMAC_SHA256_KEY(desc->tfm), data, len, out); return 0; } =20 +static int crypto_hmac_sha256_export(struct shash_desc *desc, void *out) +{ + return __crypto_sha256_export(&HMAC_SHA256_CTX(desc)->ctx.sha_ctx, out); +} + +static int crypto_hmac_sha256_import(struct shash_desc *desc, const void *= in) +{ + struct hmac_sha256_ctx *ctx =3D HMAC_SHA256_CTX(desc); + + ctx->ctx.ostate =3D HMAC_SHA256_KEY(desc->tfm)->key.ostate; + return __crypto_sha256_import(&ctx->ctx.sha_ctx, in); +} + /* Algorithm definitions */ =20 static struct shash_alg algs[] =3D { { .base.cra_name =3D "sha224", @@ -179,11 +262,14 @@ static struct shash_alg algs[] =3D { .digestsize =3D SHA224_DIGEST_SIZE, .init =3D crypto_sha224_init, .update =3D crypto_sha224_update, .final =3D crypto_sha224_final, .digest =3D crypto_sha224_digest, + .export =3D crypto_sha224_export, + .import =3D crypto_sha224_import, 
.descsize =3D sizeof(struct sha224_ctx), + .statesize =3D SHA256_SHASH_STATE_SIZE, }, { .base.cra_name =3D "sha256", .base.cra_driver_name =3D "sha256-lib", .base.cra_priority =3D 300, @@ -192,11 +278,14 @@ static struct shash_alg algs[] =3D { .digestsize =3D SHA256_DIGEST_SIZE, .init =3D crypto_sha256_init, .update =3D crypto_sha256_update, .final =3D crypto_sha256_final, .digest =3D crypto_sha256_digest, + .export =3D crypto_sha256_export, + .import =3D crypto_sha256_import, .descsize =3D sizeof(struct sha256_ctx), + .statesize =3D SHA256_SHASH_STATE_SIZE, }, { .base.cra_name =3D "hmac(sha224)", .base.cra_driver_name =3D "hmac-sha224-lib", .base.cra_priority =3D 300, @@ -207,11 +296,14 @@ static struct shash_alg algs[] =3D { .setkey =3D crypto_hmac_sha224_setkey, .init =3D crypto_hmac_sha224_init, .update =3D crypto_hmac_sha224_update, .final =3D crypto_hmac_sha224_final, .digest =3D crypto_hmac_sha224_digest, + .export =3D crypto_hmac_sha224_export, + .import =3D crypto_hmac_sha224_import, .descsize =3D sizeof(struct hmac_sha224_ctx), + .statesize =3D SHA256_SHASH_STATE_SIZE, }, { .base.cra_name =3D "hmac(sha256)", .base.cra_driver_name =3D "hmac-sha256-lib", .base.cra_priority =3D 300, @@ -222,11 +314,14 @@ static struct shash_alg algs[] =3D { .setkey =3D crypto_hmac_sha256_setkey, .init =3D crypto_hmac_sha256_init, .update =3D crypto_hmac_sha256_update, .final =3D crypto_hmac_sha256_final, .digest =3D crypto_hmac_sha256_digest, + .export =3D crypto_hmac_sha256_export, + .import =3D crypto_hmac_sha256_import, .descsize =3D sizeof(struct hmac_sha256_ctx), + .statesize =3D SHA256_SHASH_STATE_SIZE, }, }; =20 static int __init crypto_sha256_mod_init(void) { --=20 2.50.0 From nobody Sun Sep 7 11:34:13 2025
From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org,
linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , "Jason A . Donenfeld" , Eric Biggers Subject: [PATCH v2 11/14] lib/crypto: sha256: Remove sha256_is_arch_optimized() Date: Mon, 30 Jun 2025 09:06:42 -0700 Message-ID: <20250630160645.3198-12-ebiggers@kernel.org> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org> References: <20250630160645.3198-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Remove sha256_is_arch_optimized(), since it is no longer used. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- arch/mips/cavium-octeon/crypto/octeon-sha256.c | 6 ------ include/crypto/internal/sha2.h | 8 -------- lib/crypto/arm/sha256.c | 7 ------- lib/crypto/arm64/sha256.c | 7 ------- lib/crypto/powerpc/sha256.c | 6 ------ lib/crypto/riscv/sha256.c | 6 ------ lib/crypto/s390/sha256.c | 6 ------ lib/crypto/sparc/sha256.c | 6 ------ lib/crypto/x86/sha256.c | 6 ------ 9 files changed, 58 deletions(-) diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha256.c b/arch/mips/cav= ium-octeon/crypto/octeon-sha256.c index f8664818d04ec..c7c67bdc2bd06 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-sha256.c +++ b/arch/mips/cavium-octeon/crypto/octeon-sha256.c @@ -59,14 +59,8 @@ void sha256_blocks_arch(struct sha256_block_state *state, state64[3] =3D read_octeon_64bit_hash_dword(3); octeon_crypto_disable(&cop2_state, flags); } EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -bool sha256_is_arch_optimized(void) -{ - return octeon_has_crypto(); -} -EXPORT_SYMBOL_GPL(sha256_is_arch_optimized); - MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("SHA-256 Secure Hash Algorithm (OCTEON)"); MODULE_AUTHOR("Aaro Koskinen "); diff --git a/include/crypto/internal/sha2.h b/include/crypto/internal/sha2.h index f0f455477bbd7..7915a3a46bc86 100644 
--- a/include/crypto/internal/sha2.h +++ b/include/crypto/internal/sha2.h @@ -7,18 +7,10 @@ #include #include #include #include =20 -#if IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256) -bool sha256_is_arch_optimized(void); -#else -static inline bool sha256_is_arch_optimized(void) -{ - return false; -} -#endif void sha256_blocks_generic(struct sha256_block_state *state, const u8 *data, size_t nblocks); void sha256_blocks_arch(struct sha256_block_state *state, const u8 *data, size_t nblocks); =20 diff --git a/lib/crypto/arm/sha256.c b/lib/crypto/arm/sha256.c index 7d90823586952..27181be0aa92e 100644 --- a/lib/crypto/arm/sha256.c +++ b/lib/crypto/arm/sha256.c @@ -35,17 +35,10 @@ void sha256_blocks_arch(struct sha256_block_state *stat= e, sha256_block_data_order(state, data, nblocks); } } EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -bool sha256_is_arch_optimized(void) -{ - /* We always can use at least the ARM scalar implementation. */ - return true; -} -EXPORT_SYMBOL_GPL(sha256_is_arch_optimized); - static int __init sha256_arm_mod_init(void) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && (elf_hwcap & HWCAP_NEON)) { static_branch_enable(&have_neon); if (elf_hwcap2 & HWCAP2_SHA2) diff --git a/lib/crypto/arm64/sha256.c b/lib/crypto/arm64/sha256.c index 609ffb8151987..a5a4982767089 100644 --- a/lib/crypto/arm64/sha256.c +++ b/lib/crypto/arm64/sha256.c @@ -45,17 +45,10 @@ void sha256_blocks_arch(struct sha256_block_state *stat= e, sha256_block_data_order(state, data, nblocks); } } EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -bool sha256_is_arch_optimized(void) -{ - /* We always can use at least the ARM64 scalar implementation. 
*/ - return true; -} -EXPORT_SYMBOL_GPL(sha256_is_arch_optimized); - static int __init sha256_arm64_mod_init(void) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && cpu_have_named_feature(ASIMD)) { static_branch_enable(&have_neon); diff --git a/lib/crypto/powerpc/sha256.c b/lib/crypto/powerpc/sha256.c index c3f844ae0aceb..437e587b05754 100644 --- a/lib/crypto/powerpc/sha256.c +++ b/lib/crypto/powerpc/sha256.c @@ -58,13 +58,7 @@ void sha256_blocks_arch(struct sha256_block_state *state, nblocks -=3D unit; } while (nblocks); } EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -bool sha256_is_arch_optimized(void) -{ - return true; -} -EXPORT_SYMBOL_GPL(sha256_is_arch_optimized); - MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("SHA-256 Secure Hash Algorithm, SPE optimized"); diff --git a/lib/crypto/riscv/sha256.c b/lib/crypto/riscv/sha256.c index a2079aa3ae925..01004cb9c6e9e 100644 --- a/lib/crypto/riscv/sha256.c +++ b/lib/crypto/riscv/sha256.c @@ -32,16 +32,10 @@ void sha256_blocks_arch(struct sha256_block_state *stat= e, sha256_blocks_generic(state, data, nblocks); } } EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -bool sha256_is_arch_optimized(void) -{ - return static_key_enabled(&have_extensions); -} -EXPORT_SYMBOL_GPL(sha256_is_arch_optimized); - static int __init riscv64_sha256_mod_init(void) { /* Both zvknha and zvknhb provide the SHA-256 instructions. 
*/ if ((riscv_isa_extension_available(NULL, ZVKNHA) || riscv_isa_extension_available(NULL, ZVKNHB)) && diff --git a/lib/crypto/s390/sha256.c b/lib/crypto/s390/sha256.c index fb565718f7539..6ebfd35a5d44c 100644 --- a/lib/crypto/s390/sha256.c +++ b/lib/crypto/s390/sha256.c @@ -21,16 +21,10 @@ void sha256_blocks_arch(struct sha256_block_state *stat= e, else sha256_blocks_generic(state, data, nblocks); } EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -bool sha256_is_arch_optimized(void) -{ - return static_key_enabled(&have_cpacf_sha256); -} -EXPORT_SYMBOL_GPL(sha256_is_arch_optimized); - static int __init sha256_s390_mod_init(void) { if (cpu_have_feature(S390_CPU_FEATURE_MSA) && cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_256)) static_branch_enable(&have_cpacf_sha256); diff --git a/lib/crypto/sparc/sha256.c b/lib/crypto/sparc/sha256.c index 060664b88a6d3..f41c109c1c18d 100644 --- a/lib/crypto/sparc/sha256.c +++ b/lib/crypto/sparc/sha256.c @@ -30,16 +30,10 @@ void sha256_blocks_arch(struct sha256_block_state *stat= e, else sha256_blocks_generic(state, data, nblocks); } EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -bool sha256_is_arch_optimized(void) -{ - return static_key_enabled(&have_sha256_opcodes); -} -EXPORT_SYMBOL_GPL(sha256_is_arch_optimized); - static int __init sha256_sparc64_mod_init(void) { unsigned long cfr; =20 if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO)) diff --git a/lib/crypto/x86/sha256.c b/lib/crypto/x86/sha256.c index cbb45defbefab..9ee38d2b3d572 100644 --- a/lib/crypto/x86/sha256.c +++ b/lib/crypto/x86/sha256.c @@ -35,16 +35,10 @@ void sha256_blocks_arch(struct sha256_block_state *stat= e, sha256_blocks_generic(state, data, nblocks); } } EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -bool sha256_is_arch_optimized(void) -{ - return static_key_enabled(&have_sha256_x86); -} -EXPORT_SYMBOL_GPL(sha256_is_arch_optimized); - static int __init sha256_x86_mod_init(void) { if (boot_cpu_has(X86_FEATURE_SHA_NI)) { static_call_update(sha256_blocks_x86, 
sha256_ni_transform); } else if (cpu_has_xfeatures(XFEATURE_MASK_SSE | --=20 2.50.0 From nobody Sun Sep 7 11:34:13 2025
From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , "Jason A . Donenfeld" , Eric Biggers Subject: [PATCH v2 12/14] lib/crypto: sha256: Consolidate into single module Date: Mon, 30 Jun 2025 09:06:43 -0700 Message-ID: <20250630160645.3198-13-ebiggers@kernel.org> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org> References: <20250630160645.3198-1-ebiggers@kernel.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Consolidate the CPU-based SHA-256 code into a single module, following what I did with SHA-512: - Each arch now provides a header file lib/crypto/$(SRCARCH)/sha256.h, replacing lib/crypto/$(SRCARCH)/sha256.c. The header defines sha256_blocks() and optionally sha256_mod_init_arch(). It is included by lib/crypto/sha256.c, and thus the code gets built into the single libsha256 module, with proper inlining and dead code elimination. - sha256_blocks_generic() is moved from lib/crypto/sha256-generic.c into lib/crypto/sha256.c. It's now a static function marked with __maybe_unused, so the compiler automatically eliminates it in any cases where it's not used. - Whether arch-optimized SHA-256 is buildable is now controlled centrally by lib/crypto/Kconfig instead of by lib/crypto/$(SRCARCH)/Kconfig. The conditions for enabling it remain the same as before, and it remains enabled by default.
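A rough userspace sketch of the shape this describes — one translation unit whose block function prefers an arch path enabled at init time and falls back to the generic code. All names here are illustrative stand-ins, not the kernel's actual symbols: `have_arch` models the static key, and the string copies stand in for the real block transforms.

```c
#include <assert.h>
#include <string.h>

/* Models the static key that sha256_mod_init_arch() flips on. */
static int have_arch;

/* sha256_mod_init_arch() analogue: a CPU-feature probe that, when it
 * succeeds, enables the accelerated path. */
static void sha256_mod_init_arch(void)
{
	have_arch = 1;
}

/* Generic fallback; in the kernel this is static __maybe_unused so the
 * compiler drops it when a given arch header never calls it. */
static void sha256_blocks_generic(char *which)
{
	strcpy(which, "generic");
}

/* In the kernel this definition comes from the included per-arch
 * lib/crypto/$(SRCARCH)/sha256.h. */
static void sha256_blocks(char *which)
{
	if (have_arch)
		strcpy(which, "arch");	/* accelerated path */
	else
		sha256_blocks_generic(which);
}
```

Because everything lives in one translation unit, the compiler can inline sha256_blocks() into its callers and eliminate the generic fallback on configurations that never reach it.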
- Any additional arch-specific translation units for the optimized SHA-256 code (such as assembly files) are now compiled by lib/crypto/Makefile instead of lib/crypto/$(SRCARCH)/Makefile. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- arch/mips/cavium-octeon/Kconfig | 6 - arch/mips/cavium-octeon/crypto/Makefile | 1 - include/crypto/internal/sha2.h | 52 ------ lib/crypto/Kconfig | 26 ++- lib/crypto/Makefile | 39 ++++- lib/crypto/arm/Kconfig | 6 - lib/crypto/arm/Makefile | 8 +- lib/crypto/arm/{sha256.c =3D> sha256.h} | 27 +--- lib/crypto/arm64/Kconfig | 5 - lib/crypto/arm64/Makefile | 9 +- lib/crypto/arm64/{sha256.c =3D> sha256.h} | 29 ++-- .../crypto/mips/sha256.h | 14 +- lib/crypto/powerpc/Kconfig | 6 - lib/crypto/powerpc/Makefile | 3 - lib/crypto/powerpc/{sha256.c =3D> sha256.h} | 13 +- lib/crypto/riscv/Kconfig | 7 - lib/crypto/riscv/Makefile | 3 - lib/crypto/riscv/{sha256.c =3D> sha256.h} | 24 +-- lib/crypto/s390/Kconfig | 6 - lib/crypto/s390/Makefile | 3 - lib/crypto/s390/{sha256.c =3D> sha256.h} | 23 +-- lib/crypto/sha256-generic.c | 150 ------------------ lib/crypto/sha256.c | 146 +++++++++++++++-- lib/crypto/sparc/Kconfig | 8 - lib/crypto/sparc/Makefile | 4 - lib/crypto/sparc/{sha256.c =3D> sha256.h} | 29 +--- lib/crypto/x86/Kconfig | 7 - lib/crypto/x86/Makefile | 3 - lib/crypto/x86/{sha256.c =3D> sha256.h} | 29 +--- 29 files changed, 226 insertions(+), 460 deletions(-) delete mode 100644 include/crypto/internal/sha2.h rename lib/crypto/arm/{sha256.c =3D> sha256.h} (64%) rename lib/crypto/arm64/{sha256.c =3D> sha256.h} (67%) rename arch/mips/cavium-octeon/crypto/octeon-sha256.c =3D> lib/crypto/mips= /sha256.h (80%) rename lib/crypto/powerpc/{sha256.c =3D> sha256.h} (80%) rename lib/crypto/riscv/{sha256.c =3D> sha256.h} (63%) rename lib/crypto/s390/{sha256.c =3D> sha256.h} (50%) delete mode 100644 lib/crypto/sha256-generic.c delete mode 100644 lib/crypto/sparc/Kconfig delete mode 100644 lib/crypto/sparc/Makefile rename lib/crypto/sparc/{sha256.c =3D> 
sha256.h} (62%) rename lib/crypto/x86/{sha256.c =3D> sha256.h} (70%) diff --git a/arch/mips/cavium-octeon/Kconfig b/arch/mips/cavium-octeon/Kcon= fig index 11f4aa6e80e9b..450e979ef5d93 100644 --- a/arch/mips/cavium-octeon/Kconfig +++ b/arch/mips/cavium-octeon/Kconfig @@ -21,16 +21,10 @@ config CAVIUM_OCTEON_CVMSEG_SIZE local memory; the larger CVMSEG is, the smaller the cache is. This selects the size of CVMSEG LM, which is in cache blocks. The legally range is from zero to 54 cache blocks (i.e. CVMSEG LM is between zero and 6192 bytes). =20 -config CRYPTO_SHA256_OCTEON - tristate - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC - endif # CPU_CAVIUM_OCTEON =20 if CAVIUM_OCTEON_SOC =20 config CAVIUM_OCTEON_LOCK_L2 diff --git a/arch/mips/cavium-octeon/crypto/Makefile b/arch/mips/cavium-oct= eon/crypto/Makefile index 168b19ef7ce89..db428e4b30bce 100644 --- a/arch/mips/cavium-octeon/crypto/Makefile +++ b/arch/mips/cavium-octeon/crypto/Makefile @@ -5,6 +5,5 @@ =20 obj-y +=3D octeon-crypto.o =20 obj-$(CONFIG_CRYPTO_MD5_OCTEON) +=3D octeon-md5.o obj-$(CONFIG_CRYPTO_SHA1_OCTEON) +=3D octeon-sha1.o -obj-$(CONFIG_CRYPTO_SHA256_OCTEON) +=3D octeon-sha256.o diff --git a/include/crypto/internal/sha2.h b/include/crypto/internal/sha2.h deleted file mode 100644 index 7915a3a46bc86..0000000000000 --- a/include/crypto/internal/sha2.h +++ /dev/null @@ -1,52 +0,0 @@ -/* SPDX-License-Identifier: GPL-2.0-only */ - -#ifndef _CRYPTO_INTERNAL_SHA2_H -#define _CRYPTO_INTERNAL_SHA2_H - -#include -#include -#include -#include -#include - -void sha256_blocks_generic(struct sha256_block_state *state, - const u8 *data, size_t nblocks); -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks); - -static __always_inline void sha256_choose_blocks( - u32 state[SHA256_STATE_WORDS], const u8 *data, size_t nblocks, - bool force_generic, bool force_simd) -{ - if (!IS_ENABLED(CONFIG_CRYPTO_ARCH_HAVE_LIB_SHA256) || 
force_generic) - sha256_blocks_generic((struct sha256_block_state *)state, data, nblocks); - else - sha256_blocks_arch((struct sha256_block_state *)state, data, nblocks); -} - -static __always_inline void sha256_finup( - struct crypto_sha256_state *sctx, u8 buf[SHA256_BLOCK_SIZE], - size_t len, u8 out[SHA256_DIGEST_SIZE], size_t digest_size, - bool force_generic, bool force_simd) -{ - const size_t bit_offset =3D SHA256_BLOCK_SIZE - 8; - __be64 *bits =3D (__be64 *)&buf[bit_offset]; - int i; - - buf[len++] =3D 0x80; - if (len > bit_offset) { - memset(&buf[len], 0, SHA256_BLOCK_SIZE - len); - sha256_choose_blocks(sctx->state, buf, 1, force_generic, - force_simd); - len =3D 0; - } - - memset(&buf[len], 0, bit_offset - len); - *bits =3D cpu_to_be64(sctx->count << 3); - sha256_choose_blocks(sctx->state, buf, 1, force_generic, force_simd); - - for (i =3D 0; i < digest_size; i +=3D 4) - put_unaligned_be32(sctx->state[i / 4], out + i); -} - -#endif /* _CRYPTO_INTERNAL_SHA2_H */ diff --git a/lib/crypto/Kconfig b/lib/crypto/Kconfig index 9bd740475a898..3305c69085816 100644 --- a/lib/crypto/Kconfig +++ b/lib/crypto/Kconfig @@ -142,24 +142,21 @@ config CRYPTO_LIB_SHA256 help Enable the SHA-256 library interface. This interface may be fulfilled by either the generic implementation or an arch-specific one, if one is available and enabled. =20 -config CRYPTO_ARCH_HAVE_LIB_SHA256 +config CRYPTO_LIB_SHA256_ARCH bool - help - Declares whether the architecture provides an arch-specific - accelerated implementation of the SHA-256 library interface. - -config CRYPTO_LIB_SHA256_GENERIC - tristate - default CRYPTO_LIB_SHA256 if !CRYPTO_ARCH_HAVE_LIB_SHA256 - help - This symbol can be selected by arch implementations of the SHA-256 - library interface that require the generic code as a fallback, e.g., - for SIMD implementations. If no arch specific implementation is - enabled, this implementation serves the users of CRYPTO_LIB_SHA256. 
+ depends on CRYPTO_LIB_SHA256 && !UML + default y if ARM && !CPU_V7M + default y if ARM64 + default y if MIPS && CPU_CAVIUM_OCTEON + default y if PPC && SPE + default y if RISCV && 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO + default y if S390 + default y if SPARC64 + default y if X86_64 =20 config CRYPTO_LIB_SHA512 tristate help The SHA-384, SHA-512, HMAC-SHA384, and HMAC-SHA512 library functions. @@ -197,13 +194,10 @@ if RISCV source "lib/crypto/riscv/Kconfig" endif if S390 source "lib/crypto/s390/Kconfig" endif -if SPARC -source "lib/crypto/sparc/Kconfig" -endif if X86 source "lib/crypto/x86/Kconfig" endif endif =20 diff --git a/lib/crypto/Makefile b/lib/crypto/Makefile index 5823137fa5a8c..a887bf103bf05 100644 --- a/lib/crypto/Makefile +++ b/lib/crypto/Makefile @@ -64,15 +64,43 @@ libpoly1305-generic-$(CONFIG_ARCH_SUPPORTS_INT128) :=3D= poly1305-donna64.o libpoly1305-generic-y +=3D poly1305-generic.o =20 obj-$(CONFIG_CRYPTO_LIB_SHA1) +=3D libsha1.o libsha1-y :=3D sha1.o =20 -obj-$(CONFIG_CRYPTO_LIB_SHA256) +=3D libsha256.o -libsha256-y :=3D sha256.o +##########################################################################= ###### =20 -obj-$(CONFIG_CRYPTO_LIB_SHA256_GENERIC) +=3D libsha256-generic.o -libsha256-generic-y :=3D sha256-generic.o +obj-$(CONFIG_CRYPTO_LIB_SHA256) +=3D libsha256.o +libsha256-y :=3D sha256.o +ifeq ($(CONFIG_CRYPTO_LIB_SHA256_ARCH),y) +CFLAGS_sha256.o +=3D -I$(src)/$(SRCARCH) + +ifeq ($(CONFIG_ARM),y) +libsha256-y +=3D arm/sha256-ce.o arm/sha256-core.o +$(obj)/arm/sha256-core.S: $(src)/arm/sha256-armv4.pl + $(call cmd,perlasm) +clean-files +=3D arm/sha256-core.S +AFLAGS_arm/sha256-core.o +=3D $(aflags-thumb2-y) +endif + +ifeq ($(CONFIG_ARM64),y) +libsha256-y +=3D arm64/sha256-core.o +$(obj)/arm64/sha256-core.S: $(src)/arm64/sha2-armv8.pl + $(call cmd,perlasm_with_args) +clean-files +=3D arm64/sha256-core.S +libsha256-$(CONFIG_KERNEL_MODE_NEON) +=3D arm64/sha256-ce.o +endif + +libsha256-$(CONFIG_PPC) +=3D 
powerpc/sha256-spe-asm.o +libsha256-$(CONFIG_RISCV) +=3D riscv/sha256-riscv64-zvknha_or_zvknhb-zvkb.o +libsha256-$(CONFIG_SPARC) +=3D sparc/sha256_asm.o +libsha256-$(CONFIG_X86) +=3D x86/sha256-ssse3-asm.o \ + x86/sha256-avx-asm.o \ + x86/sha256-avx2-asm.o \ + x86/sha256-ni-asm.o +endif # CONFIG_CRYPTO_LIB_SHA256_ARCH + +##########################################################################= ###### =20 obj-$(CONFIG_CRYPTO_LIB_SHA512) +=3D libsha512.o libsha512-y :=3D sha512.o ifeq ($(CONFIG_CRYPTO_LIB_SHA512_ARCH),y) CFLAGS_sha512.o +=3D -I$(src)/$(SRCARCH) @@ -98,10 +126,12 @@ libsha512-$(CONFIG_SPARC) +=3D sparc/sha512_asm.o libsha512-$(CONFIG_X86) +=3D x86/sha512-ssse3-asm.o \ x86/sha512-avx-asm.o \ x86/sha512-avx2-asm.o endif # CONFIG_CRYPTO_LIB_SHA512_ARCH =20 +##########################################################################= ###### + obj-$(CONFIG_MPILIB) +=3D mpi/ =20 obj-$(CONFIG_CRYPTO_SELFTESTS_FULL) +=3D simd.o =20 obj-$(CONFIG_CRYPTO_LIB_SM3) +=3D libsm3.o @@ -111,7 +141,6 @@ obj-$(CONFIG_ARM) +=3D arm/ obj-$(CONFIG_ARM64) +=3D arm64/ obj-$(CONFIG_MIPS) +=3D mips/ obj-$(CONFIG_PPC) +=3D powerpc/ obj-$(CONFIG_RISCV) +=3D riscv/ obj-$(CONFIG_S390) +=3D s390/ -obj-$(CONFIG_SPARC) +=3D sparc/ obj-$(CONFIG_X86) +=3D x86/ diff --git a/lib/crypto/arm/Kconfig b/lib/crypto/arm/Kconfig index 9f3ff30f40328..e8444fd0aae30 100644 --- a/lib/crypto/arm/Kconfig +++ b/lib/crypto/arm/Kconfig @@ -20,11 +20,5 @@ config CRYPTO_CHACHA20_NEON =20 config CRYPTO_POLY1305_ARM tristate default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 - -config CRYPTO_SHA256_ARM - tristate - depends on !CPU_V7M - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 diff --git a/lib/crypto/arm/Makefile b/lib/crypto/arm/Makefile index 431f77c3ff6fd..4c042a4c77ed6 100644 --- a/lib/crypto/arm/Makefile +++ b/lib/crypto/arm/Makefile @@ -8,25 +8,19 @@ chacha-neon-y :=3D chacha-scalar-core.o chacha-glue.o chacha-neon-$(CONFIG_KERNEL_MODE_NEON) +=3D chacha-neon-core.o 
=20 obj-$(CONFIG_CRYPTO_POLY1305_ARM) +=3D poly1305-arm.o poly1305-arm-y :=3D poly1305-core.o poly1305-glue.o =20 -obj-$(CONFIG_CRYPTO_SHA256_ARM) +=3D sha256-arm.o -sha256-arm-y :=3D sha256.o sha256-core.o -sha256-arm-$(CONFIG_KERNEL_MODE_NEON) +=3D sha256-ce.o - quiet_cmd_perl =3D PERL $@ cmd_perl =3D $(PERL) $(<) > $(@) =20 $(obj)/%-core.S: $(src)/%-armv4.pl $(call cmd,perl) =20 -clean-files +=3D poly1305-core.S sha256-core.S +clean-files +=3D poly1305-core.S =20 aflags-thumb2-$(CONFIG_THUMB2_KERNEL) :=3D -U__thumb2__ -D__thumb2__=3D1 =20 # massage the perlasm code a bit so we only get the NEON routine if we nee= d it poly1305-aflags-$(CONFIG_CPU_V7) :=3D -U__LINUX_ARM_ARCH__ -D__LINUX_ARM_A= RCH__=3D5 poly1305-aflags-$(CONFIG_KERNEL_MODE_NEON) :=3D -U__LINUX_ARM_ARCH__ -D__L= INUX_ARM_ARCH__=3D7 AFLAGS_poly1305-core.o +=3D $(poly1305-aflags-y) $(aflags-thumb2-y) - -AFLAGS_sha256-core.o +=3D $(aflags-thumb2-y) diff --git a/lib/crypto/arm/sha256.c b/lib/crypto/arm/sha256.h similarity index 64% rename from lib/crypto/arm/sha256.c rename to lib/crypto/arm/sha256.h index 27181be0aa92e..da75cbdc51d41 100644 --- a/lib/crypto/arm/sha256.c +++ b/lib/crypto/arm/sha256.h @@ -1,16 +1,13 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 optimized for ARM * * Copyright 2025 Google LLC */ #include -#include #include -#include -#include =20 asmlinkage void sha256_block_data_order(struct sha256_block_state *state, const u8 *data, size_t nblocks); asmlinkage void sha256_block_data_order_neon(struct sha256_block_state *st= ate, const u8 *data, size_t nblocks); @@ -18,12 +15,12 @@ asmlinkage void sha256_ce_transform(struct sha256_block= _state *state, const u8 *data, size_t nblocks); =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce); =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void 
sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && static_branch_likely(&have_neon) && crypto_simd_usable()) { kernel_neon_begin(); if (static_branch_likely(&have_ce)) @@ -33,25 +30,17 @@ void sha256_blocks_arch(struct sha256_block_state *stat= e, kernel_neon_end(); } else { sha256_block_data_order(state, data, nblocks); } } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -static int __init sha256_arm_mod_init(void) +#ifdef CONFIG_KERNEL_MODE_NEON +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { - if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && (elf_hwcap & HWCAP_NEON)) { + if (elf_hwcap & HWCAP_NEON) { static_branch_enable(&have_neon); if (elf_hwcap2 & HWCAP2_SHA2) static_branch_enable(&have_ce); } - return 0; } -subsys_initcall(sha256_arm_mod_init); - -static void __exit sha256_arm_mod_exit(void) -{ -} -module_exit(sha256_arm_mod_exit); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 optimized for ARM"); +#endif /* CONFIG_KERNEL_MODE_NEON */ diff --git a/lib/crypto/arm64/Kconfig b/lib/crypto/arm64/Kconfig index 49e57bfdb5b52..0b903ef524d85 100644 --- a/lib/crypto/arm64/Kconfig +++ b/lib/crypto/arm64/Kconfig @@ -10,10 +10,5 @@ config CRYPTO_CHACHA20_NEON config CRYPTO_POLY1305_NEON tristate depends on KERNEL_MODE_NEON default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 - -config CRYPTO_SHA256_ARM64 - tristate - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 diff --git a/lib/crypto/arm64/Makefile b/lib/crypto/arm64/Makefile index 946c099037117..6207088397a73 100644 --- a/lib/crypto/arm64/Makefile +++ b/lib/crypto/arm64/Makefile @@ -6,19 +6,12 @@ chacha-neon-y :=3D chacha-neon-core.o chacha-neon-glue.o obj-$(CONFIG_CRYPTO_POLY1305_NEON) +=3D poly1305-neon.o poly1305-neon-y :=3D poly1305-core.o poly1305-glue.o AFLAGS_poly1305-core.o +=3D -Dpoly1305_init=3Dpoly1305_block_init_arch AFLAGS_poly1305-core.o +=3D 
-Dpoly1305_emit=3Dpoly1305_emit_arch =20 -obj-$(CONFIG_CRYPTO_SHA256_ARM64) +=3D sha256-arm64.o -sha256-arm64-y :=3D sha256.o sha256-core.o -sha256-arm64-$(CONFIG_KERNEL_MODE_NEON) +=3D sha256-ce.o - quiet_cmd_perlasm =3D PERLASM $@ cmd_perlasm =3D $(PERL) $(<) void $(@) =20 $(obj)/%-core.S: $(src)/%-armv8.pl $(call cmd,perlasm) =20 -$(obj)/sha256-core.S: $(src)/sha2-armv8.pl - $(call cmd,perlasm) - -clean-files +=3D poly1305-core.S sha256-core.S +clean-files +=3D poly1305-core.S diff --git a/lib/crypto/arm64/sha256.c b/lib/crypto/arm64/sha256.h similarity index 67% rename from lib/crypto/arm64/sha256.c rename to lib/crypto/arm64/sha256.h index a5a4982767089..a211966c124a9 100644 --- a/lib/crypto/arm64/sha256.c +++ b/lib/crypto/arm64/sha256.h @@ -1,16 +1,14 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 optimized for ARM64 * * Copyright 2025 Google LLC */ #include -#include #include -#include -#include +#include =20 asmlinkage void sha256_block_data_order(struct sha256_block_state *state, const u8 *data, size_t nblocks); asmlinkage void sha256_block_neon(struct sha256_block_state *state, const u8 *data, size_t nblocks); @@ -18,12 +16,12 @@ asmlinkage size_t __sha256_ce_transform(struct sha256_b= lock_state *state, const u8 *data, size_t nblocks); =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_neon); static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_ce); =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && static_branch_likely(&have_neon) && crypto_simd_usable()) { if (static_branch_likely(&have_ce)) { do { @@ -43,26 +41,17 @@ void sha256_blocks_arch(struct sha256_block_state *stat= e, } } else { sha256_block_data_order(state, data, nblocks); } } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -static int __init 
sha256_arm64_mod_init(void) +#ifdef CONFIG_KERNEL_MODE_NEON +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { - if (IS_ENABLED(CONFIG_KERNEL_MODE_NEON) && - cpu_have_named_feature(ASIMD)) { + if (cpu_have_named_feature(ASIMD)) { static_branch_enable(&have_neon); if (cpu_have_named_feature(SHA2)) static_branch_enable(&have_ce); } - return 0; } -subsys_initcall(sha256_arm64_mod_init); - -static void __exit sha256_arm64_mod_exit(void) -{ -} -module_exit(sha256_arm64_mod_exit); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 optimized for ARM64"); +#endif /* CONFIG_KERNEL_MODE_NEON */ diff --git a/arch/mips/cavium-octeon/crypto/octeon-sha256.c b/lib/crypto/mi= ps/sha256.h similarity index 80% rename from arch/mips/cavium-octeon/crypto/octeon-sha256.c rename to lib/crypto/mips/sha256.h index c7c67bdc2bd06..ccccfd131634b 100644 --- a/arch/mips/cavium-octeon/crypto/octeon-sha256.c +++ b/lib/crypto/mips/sha256.h @@ -1,6 +1,6 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 Secure Hash Algorithm. * * Adapted for OCTEON by Aaro Koskinen . * @@ -12,20 +12,17 @@ * SHA224 Support Copyright 2007 Intel Corporation */ =20 #include #include -#include -#include -#include =20 /* * We pass everything as 64-bit. OCTEON can handle misaligned data. 
*/ =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { struct octeon_cop2_state cop2_state; u64 *state64 =3D (u64 *)state; unsigned long flags; =20 @@ -57,10 +54,5 @@ void sha256_blocks_arch(struct sha256_block_state *state, state64[1] =3D read_octeon_64bit_hash_dword(1); state64[2] =3D read_octeon_64bit_hash_dword(2); state64[3] =3D read_octeon_64bit_hash_dword(3); octeon_crypto_disable(&cop2_state, flags); } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 Secure Hash Algorithm (OCTEON)"); -MODULE_AUTHOR("Aaro Koskinen "); diff --git a/lib/crypto/powerpc/Kconfig b/lib/crypto/powerpc/Kconfig index 3f9e1bbd9905b..2eaeb7665a6a0 100644 --- a/lib/crypto/powerpc/Kconfig +++ b/lib/crypto/powerpc/Kconfig @@ -12,11 +12,5 @@ config CRYPTO_POLY1305_P10 depends on PPC64 && CPU_LITTLE_ENDIAN && VSX depends on BROKEN # Needs to be fixed to work in softirq context default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 select CRYPTO_LIB_POLY1305_GENERIC - -config CRYPTO_SHA256_PPC_SPE - tristate - depends on SPE - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 diff --git a/lib/crypto/powerpc/Makefile b/lib/crypto/powerpc/Makefile index 27f231f8e334a..5709ae14258a0 100644 --- a/lib/crypto/powerpc/Makefile +++ b/lib/crypto/powerpc/Makefile @@ -3,8 +3,5 @@ obj-$(CONFIG_CRYPTO_CHACHA20_P10) +=3D chacha-p10-crypto.o chacha-p10-crypto-y :=3D chacha-p10-glue.o chacha-p10le-8x.o =20 obj-$(CONFIG_CRYPTO_POLY1305_P10) +=3D poly1305-p10-crypto.o poly1305-p10-crypto-y :=3D poly1305-p10-glue.o poly1305-p10le_64.o - -obj-$(CONFIG_CRYPTO_SHA256_PPC_SPE) +=3D sha256-ppc-spe.o -sha256-ppc-spe-y :=3D sha256.o sha256-spe-asm.o diff --git a/lib/crypto/powerpc/sha256.c b/lib/crypto/powerpc/sha256.h similarity index 80% rename from lib/crypto/powerpc/sha256.c rename to 
lib/crypto/powerpc/sha256.h index 437e587b05754..d923698de5745 100644 --- a/lib/crypto/powerpc/sha256.c +++ b/lib/crypto/powerpc/sha256.h @@ -1,19 +1,16 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 Secure Hash Algorithm, SPE optimized * * Based on generic implementation. The assembler module takes care * about the SPE registers so it can run from interrupt context. * * Copyright (c) 2015 Markus Stockhausen */ =20 #include -#include -#include -#include #include =20 /* * MAX_BYTES defines the number of bytes that are allowed to be processed * between preempt_disable() and preempt_enable(). SHA256 takes ~2,000 @@ -40,12 +37,12 @@ static void spe_end(void) disable_kernel_spe(); /* reenable preemption */ preempt_enable(); } =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { do { /* cut input data into smaller blocks */ u32 unit =3D min_t(size_t, nblocks, MAX_BYTES / SHA256_BLOCK_SIZE); @@ -56,9 +53,5 @@ void sha256_blocks_arch(struct sha256_block_state *state, =20 data +=3D unit * SHA256_BLOCK_SIZE; nblocks -=3D unit; } while (nblocks); } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 Secure Hash Algorithm, SPE optimized"); diff --git a/lib/crypto/riscv/Kconfig b/lib/crypto/riscv/Kconfig index c100571feb7e8..bc7a43f33eb3a 100644 --- a/lib/crypto/riscv/Kconfig +++ b/lib/crypto/riscv/Kconfig @@ -4,12 +4,5 @@ config CRYPTO_CHACHA_RISCV64 tristate depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO default CRYPTO_LIB_CHACHA select CRYPTO_ARCH_HAVE_LIB_CHACHA select CRYPTO_LIB_CHACHA_GENERIC - -config CRYPTO_SHA256_RISCV64 - tristate - depends on 64BIT && RISCV_ISA_V && TOOLCHAIN_HAS_VECTOR_CRYPTO - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC diff --git 
a/lib/crypto/riscv/Makefile b/lib/crypto/riscv/Makefile index b7cb877a2c07e..e27b78f317fc8 100644 --- a/lib/crypto/riscv/Makefile +++ b/lib/crypto/riscv/Makefile @@ -1,7 +1,4 @@ # SPDX-License-Identifier: GPL-2.0-only =20 obj-$(CONFIG_CRYPTO_CHACHA_RISCV64) +=3D chacha-riscv64.o chacha-riscv64-y :=3D chacha-riscv64-glue.o chacha-riscv64-zvkb.o - -obj-$(CONFIG_CRYPTO_SHA256_RISCV64) +=3D sha256-riscv64.o -sha256-riscv64-y :=3D sha256.o sha256-riscv64-zvknha_or_zvknhb-zvkb.o diff --git a/lib/crypto/riscv/sha256.c b/lib/crypto/riscv/sha256.h similarity index 63% rename from lib/crypto/riscv/sha256.c rename to lib/crypto/riscv/sha256.h index 01004cb9c6e9e..c0f79c18f1199 100644 --- a/lib/crypto/riscv/sha256.c +++ b/lib/crypto/riscv/sha256.h @@ -1,6 +1,6 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 (RISC-V accelerated) * * Copyright (C) 2022 VRULL GmbH * Author: Heiko Stuebner @@ -8,49 +8,35 @@ * Copyright (C) 2023 SiFive, Inc. * Author: Jerry Shih */ =20 #include -#include #include -#include -#include =20 asmlinkage void sha256_transform_zvknha_or_zvknhb_zvkb(struct sha256_block_state *state, const u8 *data, size_t nblocks); =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_extensions); =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (static_branch_likely(&have_extensions) && crypto_simd_usable()) { kernel_vector_begin(); sha256_transform_zvknha_or_zvknhb_zvkb(state, data, nblocks); kernel_vector_end(); } else { sha256_blocks_generic(state, data, nblocks); } } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -static int __init riscv64_sha256_mod_init(void) +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { /* Both zvknha and zvknhb provide the SHA-256 instructions. 
*/ if ((riscv_isa_extension_available(NULL, ZVKNHA) || riscv_isa_extension_available(NULL, ZVKNHB)) && riscv_isa_extension_available(NULL, ZVKB) && riscv_vector_vlen() >=3D 128) static_branch_enable(&have_extensions); - return 0; } -subsys_initcall(riscv64_sha256_mod_init); - -static void __exit riscv64_sha256_mod_exit(void) -{ -} -module_exit(riscv64_sha256_mod_exit); - -MODULE_DESCRIPTION("SHA-256 (RISC-V accelerated)"); -MODULE_AUTHOR("Heiko Stuebner "); -MODULE_LICENSE("GPL"); diff --git a/lib/crypto/s390/Kconfig b/lib/crypto/s390/Kconfig index e3f855ef43934..069b355fe51aa 100644 --- a/lib/crypto/s390/Kconfig +++ b/lib/crypto/s390/Kconfig @@ -3,11 +3,5 @@ config CRYPTO_CHACHA_S390 tristate default CRYPTO_LIB_CHACHA select CRYPTO_LIB_CHACHA_GENERIC select CRYPTO_ARCH_HAVE_LIB_CHACHA - -config CRYPTO_SHA256_S390 - tristate - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC diff --git a/lib/crypto/s390/Makefile b/lib/crypto/s390/Makefile index 5df30f1e79307..06c2cf77178ef 100644 --- a/lib/crypto/s390/Makefile +++ b/lib/crypto/s390/Makefile @@ -1,7 +1,4 @@ # SPDX-License-Identifier: GPL-2.0-only =20 obj-$(CONFIG_CRYPTO_CHACHA_S390) +=3D chacha_s390.o chacha_s390-y :=3D chacha-glue.o chacha-s390.o - -obj-$(CONFIG_CRYPTO_SHA256_S390) +=3D sha256-s390.o -sha256-s390-y :=3D sha256.o diff --git a/lib/crypto/s390/sha256.c b/lib/crypto/s390/sha256.h similarity index 50% rename from lib/crypto/s390/sha256.c rename to lib/crypto/s390/sha256.h index 6ebfd35a5d44c..70a81cbc06b2c 100644 --- a/lib/crypto/s390/sha256.c +++ b/lib/crypto/s390/sha256.h @@ -1,41 +1,28 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 optimized using the CP Assist for Cryptographic Functions (CPAC= F) * * Copyright 2025 Google LLC */ #include -#include #include -#include -#include =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_cpacf_sha256); =20 -void sha256_blocks_arch(struct 
sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (static_branch_likely(&have_cpacf_sha256)) cpacf_kimd(CPACF_KIMD_SHA_256, state, data, nblocks * SHA256_BLOCK_SIZE); else sha256_blocks_generic(state, data, nblocks); } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -static int __init sha256_s390_mod_init(void) +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { if (cpu_have_feature(S390_CPU_FEATURE_MSA) && cpacf_query_func(CPACF_KIMD, CPACF_KIMD_SHA_256)) static_branch_enable(&have_cpacf_sha256); - return 0; } -subsys_initcall(sha256_s390_mod_init); - -static void __exit sha256_s390_mod_exit(void) -{ -} -module_exit(sha256_s390_mod_exit); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 using the CP Assist for Cryptographic Function= s (CPACF)"); diff --git a/lib/crypto/sha256-generic.c b/lib/crypto/sha256-generic.c deleted file mode 100644 index 99f904033c261..0000000000000 --- a/lib/crypto/sha256-generic.c +++ /dev/null @@ -1,150 +0,0 @@ -// SPDX-License-Identifier: GPL-2.0-or-later -/* - * SHA-256, as specified in - * http://csrc.nist.gov/groups/STM/cavp/documents/shs/sha256-384-512.pdf - * - * SHA-256 code by Jean-Luc Cooke . - * - * Copyright (c) Jean-Luc Cooke - * Copyright (c) Andrew McDonald - * Copyright (c) 2002 James Morris - * Copyright (c) 2014 Red Hat Inc. 
- */ - -#include -#include -#include -#include -#include -#include - -static const u32 SHA256_K[] =3D { - 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, - 0x3956c25b, 0x59f111f1, 0x923f82a4, 0xab1c5ed5, - 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, - 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, - 0xe49b69c1, 0xefbe4786, 0x0fc19dc6, 0x240ca1cc, - 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da, - 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, - 0xc6e00bf3, 0xd5a79147, 0x06ca6351, 0x14292967, - 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, - 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, - 0xa2bfe8a1, 0xa81a664b, 0xc24b8b70, 0xc76c51a3, - 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070, - 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, - 0x391c0cb3, 0x4ed8aa4a, 0x5b9cca4f, 0x682e6ff3, - 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, - 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2, -}; - -static inline u32 Ch(u32 x, u32 y, u32 z) -{ - return z ^ (x & (y ^ z)); -} - -static inline u32 Maj(u32 x, u32 y, u32 z) -{ - return (x & y) | (z & (x | y)); -} - -#define e0(x) (ror32(x, 2) ^ ror32(x, 13) ^ ror32(x, 22)) -#define e1(x) (ror32(x, 6) ^ ror32(x, 11) ^ ror32(x, 25)) -#define s0(x) (ror32(x, 7) ^ ror32(x, 18) ^ (x >> 3)) -#define s1(x) (ror32(x, 17) ^ ror32(x, 19) ^ (x >> 10)) - -static inline void LOAD_OP(int I, u32 *W, const u8 *input) -{ - W[I] =3D get_unaligned_be32((__u32 *)input + I); -} - -static inline void BLEND_OP(int I, u32 *W) -{ - W[I] =3D s1(W[I-2]) + W[I-7] + s0(W[I-15]) + W[I-16]; -} - -#define SHA256_ROUND(i, a, b, c, d, e, f, g, h) do { \ - u32 t1, t2; \ - t1 =3D h + e1(e) + Ch(e, f, g) + SHA256_K[i] + W[i]; \ - t2 =3D e0(a) + Maj(a, b, c); \ - d +=3D t1; \ - h =3D t1 + t2; \ -} while (0) - -static void sha256_block_generic(struct sha256_block_state *state, - const u8 *input, u32 W[64]) -{ - u32 a, b, c, d, e, f, g, h; - int i; - - /* load the input */ - for (i =3D 0; i < 16; i +=3D 8) { - LOAD_OP(i + 0, W, input); - LOAD_OP(i + 1, W, 
input); - LOAD_OP(i + 2, W, input); - LOAD_OP(i + 3, W, input); - LOAD_OP(i + 4, W, input); - LOAD_OP(i + 5, W, input); - LOAD_OP(i + 6, W, input); - LOAD_OP(i + 7, W, input); - } - - /* now blend */ - for (i =3D 16; i < 64; i +=3D 8) { - BLEND_OP(i + 0, W); - BLEND_OP(i + 1, W); - BLEND_OP(i + 2, W); - BLEND_OP(i + 3, W); - BLEND_OP(i + 4, W); - BLEND_OP(i + 5, W); - BLEND_OP(i + 6, W); - BLEND_OP(i + 7, W); - } - - /* load the state into our registers */ - a =3D state->h[0]; - b =3D state->h[1]; - c =3D state->h[2]; - d =3D state->h[3]; - e =3D state->h[4]; - f =3D state->h[5]; - g =3D state->h[6]; - h =3D state->h[7]; - - /* now iterate */ - for (i =3D 0; i < 64; i +=3D 8) { - SHA256_ROUND(i + 0, a, b, c, d, e, f, g, h); - SHA256_ROUND(i + 1, h, a, b, c, d, e, f, g); - SHA256_ROUND(i + 2, g, h, a, b, c, d, e, f); - SHA256_ROUND(i + 3, f, g, h, a, b, c, d, e); - SHA256_ROUND(i + 4, e, f, g, h, a, b, c, d); - SHA256_ROUND(i + 5, d, e, f, g, h, a, b, c); - SHA256_ROUND(i + 6, c, d, e, f, g, h, a, b); - SHA256_ROUND(i + 7, b, c, d, e, f, g, h, a); - } - - state->h[0] +=3D a; - state->h[1] +=3D b; - state->h[2] +=3D c; - state->h[3] +=3D d; - state->h[4] +=3D e; - state->h[5] +=3D f; - state->h[6] +=3D g; - state->h[7] +=3D h; -} - -void sha256_blocks_generic(struct sha256_block_state *state, - const u8 *data, size_t nblocks) -{ - u32 W[64]; - - do { - sha256_block_generic(state, data, W); - data +=3D SHA256_BLOCK_SIZE; - } while (--nblocks); - - memzero_explicit(W, sizeof(W)); -} -EXPORT_SYMBOL_GPL(sha256_blocks_generic); - -MODULE_DESCRIPTION("SHA-256 Algorithm (generic implementation)"); -MODULE_LICENSE("GPL"); diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c index 12b4b59052c4a..68936d5cd7745 100644 --- a/lib/crypto/sha256.c +++ b/lib/crypto/sha256.c @@ -4,19 +4,21 @@ * * Copyright (c) Jean-Luc Cooke * Copyright (c) Andrew McDonald * Copyright (c) 2002 James Morris * Copyright (c) 2014 Red Hat Inc. 
+ * Copyright 2025 Google LLC */ =20 #include #include -#include +#include #include #include #include #include +#include #include =20 static const struct sha256_block_state sha224_iv =3D { .h =3D { SHA224_H0, SHA224_H1, SHA224_H2, SHA224_H3, @@ -29,30 +31,132 @@ static const struct sha256_block_state sha256_iv =3D { SHA256_H0, SHA256_H1, SHA256_H2, SHA256_H3, SHA256_H4, SHA256_H5, SHA256_H6, SHA256_H7, }, }; =20 -/* - * If __DISABLE_EXPORTS is defined, then this file is being compiled for a - * pre-boot environment. In that case, ignore the kconfig options, pull t= he - * generic code into the same translation unit, and use that only. - */ -#ifdef __DISABLE_EXPORTS -#include "sha256-generic.c" -#endif +static const u32 sha256_K[64] =3D { + 0x428a2f98, 0x71374491, 0xb5c0fbcf, 0xe9b5dba5, 0x3956c25b, 0x59f111f1, + 0x923f82a4, 0xab1c5ed5, 0xd807aa98, 0x12835b01, 0x243185be, 0x550c7dc3, + 0x72be5d74, 0x80deb1fe, 0x9bdc06a7, 0xc19bf174, 0xe49b69c1, 0xefbe4786, + 0x0fc19dc6, 0x240ca1cc, 0x2de92c6f, 0x4a7484aa, 0x5cb0a9dc, 0x76f988da, + 0x983e5152, 0xa831c66d, 0xb00327c8, 0xbf597fc7, 0xc6e00bf3, 0xd5a79147, + 0x06ca6351, 0x14292967, 0x27b70a85, 0x2e1b2138, 0x4d2c6dfc, 0x53380d13, + 0x650a7354, 0x766a0abb, 0x81c2c92e, 0x92722c85, 0xa2bfe8a1, 0xa81a664b, + 0xc24b8b70, 0xc76c51a3, 0xd192e819, 0xd6990624, 0xf40e3585, 0x106aa070, + 0x19a4c116, 0x1e376c08, 0x2748774c, 0x34b0bcb5, 0x391c0cb3, 0x4ed8aa4a, + 0x5b9cca4f, 0x682e6ff3, 0x748f82ee, 0x78a5636f, 0x84c87814, 0x8cc70208, + 0x90befffa, 0xa4506ceb, 0xbef9a3f7, 0xc67178f2, +}; + +#define Ch(x, y, z) ((z) ^ ((x) & ((y) ^ (z)))) +#define Maj(x, y, z) (((x) & (y)) | ((z) & ((x) | (y)))) +#define e0(x) (ror32((x), 2) ^ ror32((x), 13) ^ ror32((x), 22)) +#define e1(x) (ror32((x), 6) ^ ror32((x), 11) ^ ror32((x), 25)) +#define s0(x) (ror32((x), 7) ^ ror32((x), 18) ^ ((x) >> 3)) +#define s1(x) (ror32((x), 17) ^ ror32((x), 19) ^ ((x) >> 10)) + +static inline void LOAD_OP(int I, u32 *W, const u8 *input) +{ + W[I] =3D 
get_unaligned_be32((__u32 *)input + I); +} + +static inline void BLEND_OP(int I, u32 *W) +{ + W[I] =3D s1(W[I - 2]) + W[I - 7] + s0(W[I - 15]) + W[I - 16]; +} =20 -static inline bool sha256_purgatory(void) +#define SHA256_ROUND(i, a, b, c, d, e, f, g, h) \ + do { \ + u32 t1, t2; \ + t1 =3D h + e1(e) + Ch(e, f, g) + sha256_K[i] + W[i]; \ + t2 =3D e0(a) + Maj(a, b, c); \ + d +=3D t1; \ + h =3D t1 + t2; \ + } while (0) + +static void sha256_block_generic(struct sha256_block_state *state, + const u8 *input, u32 W[64]) { - return __is_defined(__DISABLE_EXPORTS); + u32 a, b, c, d, e, f, g, h; + int i; + + /* load the input */ + for (i =3D 0; i < 16; i +=3D 8) { + LOAD_OP(i + 0, W, input); + LOAD_OP(i + 1, W, input); + LOAD_OP(i + 2, W, input); + LOAD_OP(i + 3, W, input); + LOAD_OP(i + 4, W, input); + LOAD_OP(i + 5, W, input); + LOAD_OP(i + 6, W, input); + LOAD_OP(i + 7, W, input); + } + + /* now blend */ + for (i =3D 16; i < 64; i +=3D 8) { + BLEND_OP(i + 0, W); + BLEND_OP(i + 1, W); + BLEND_OP(i + 2, W); + BLEND_OP(i + 3, W); + BLEND_OP(i + 4, W); + BLEND_OP(i + 5, W); + BLEND_OP(i + 6, W); + BLEND_OP(i + 7, W); + } + + /* load the state into our registers */ + a =3D state->h[0]; + b =3D state->h[1]; + c =3D state->h[2]; + d =3D state->h[3]; + e =3D state->h[4]; + f =3D state->h[5]; + g =3D state->h[6]; + h =3D state->h[7]; + + /* now iterate */ + for (i =3D 0; i < 64; i +=3D 8) { + SHA256_ROUND(i + 0, a, b, c, d, e, f, g, h); + SHA256_ROUND(i + 1, h, a, b, c, d, e, f, g); + SHA256_ROUND(i + 2, g, h, a, b, c, d, e, f); + SHA256_ROUND(i + 3, f, g, h, a, b, c, d, e); + SHA256_ROUND(i + 4, e, f, g, h, a, b, c, d); + SHA256_ROUND(i + 5, d, e, f, g, h, a, b, c); + SHA256_ROUND(i + 6, c, d, e, f, g, h, a, b); + SHA256_ROUND(i + 7, b, c, d, e, f, g, h, a); + } + + state->h[0] +=3D a; + state->h[1] +=3D b; + state->h[2] +=3D c; + state->h[3] +=3D d; + state->h[4] +=3D e; + state->h[5] +=3D f; + state->h[6] +=3D g; + state->h[7] +=3D h; } =20 -static inline void 
sha256_blocks(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void __maybe_unused +sha256_blocks_generic(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { - sha256_choose_blocks(state->h, data, nblocks, sha256_purgatory(), false); + u32 W[64]; + + do { + sha256_block_generic(state, data, W); + data +=3D SHA256_BLOCK_SIZE; + } while (--nblocks); + + memzero_explicit(W, sizeof(W)); } =20 +#if defined(CONFIG_CRYPTO_LIB_SHA256_ARCH) && !defined(__DISABLE_EXPORTS) +#include "sha256.h" /* $(SRCARCH)/sha256.h */ +#else +#define sha256_blocks sha256_blocks_generic +#endif + static void __sha256_init(struct __sha256_ctx *ctx, const struct sha256_block_state *iv, u64 initial_bytecount) { ctx->state =3D *iv; @@ -271,7 +375,21 @@ void hmac_sha256_usingrawkey(const u8 *raw_key, size_t= raw_key_len, memzero_explicit(&key, sizeof(key)); } EXPORT_SYMBOL_GPL(hmac_sha256_usingrawkey); #endif /* !__DISABLE_EXPORTS */ =20 +#ifdef sha256_mod_init_arch +static int __init sha256_mod_init(void) +{ + sha256_mod_init_arch(); + return 0; +} +subsys_initcall(sha256_mod_init); + +static void __exit sha256_mod_exit(void) +{ +} +module_exit(sha256_mod_exit); +#endif + MODULE_DESCRIPTION("SHA-224, SHA-256, HMAC-SHA224, and HMAC-SHA256 library= functions"); MODULE_LICENSE("GPL"); diff --git a/lib/crypto/sparc/Kconfig b/lib/crypto/sparc/Kconfig deleted file mode 100644 index e5c3e4d3dba62..0000000000000 --- a/lib/crypto/sparc/Kconfig +++ /dev/null @@ -1,8 +0,0 @@ -# SPDX-License-Identifier: GPL-2.0-only - -config CRYPTO_SHA256_SPARC64 - tristate - depends on SPARC64 - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC diff --git a/lib/crypto/sparc/Makefile b/lib/crypto/sparc/Makefile deleted file mode 100644 index 75ee244ad6f79..0000000000000 --- a/lib/crypto/sparc/Makefile +++ /dev/null @@ -1,4 +0,0 @@ -# SPDX-License-Identifier: GPL-2.0-only - -obj-$(CONFIG_CRYPTO_SHA256_SPARC64) +=3D sha256-sparc64.o 
-sha256-sparc64-y :=3D sha256.o sha256_asm.o diff --git a/lib/crypto/sparc/sha256.c b/lib/crypto/sparc/sha256.h similarity index 62% rename from lib/crypto/sparc/sha256.c rename to lib/crypto/sparc/sha256.h index f41c109c1c18d..1d10108eb1954 100644 --- a/lib/crypto/sparc/sha256.c +++ b/lib/crypto/sparc/sha256.h @@ -1,58 +1,43 @@ -// SPDX-License-Identifier: GPL-2.0-only +/* SPDX-License-Identifier: GPL-2.0-only */ /* * SHA-256 accelerated using the sparc64 sha256 opcodes * * Copyright (c) Jean-Luc Cooke * Copyright (c) Andrew McDonald * Copyright (c) 2002 James Morris * SHA224 Support Copyright 2007 Intel Corporation */ =20 -#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt - #include #include #include -#include -#include -#include =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_opcodes); =20 asmlinkage void sha256_sparc64_transform(struct sha256_block_state *state, const u8 *data, size_t nblocks); =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (static_branch_likely(&have_sha256_opcodes)) sha256_sparc64_transform(state, data, nblocks); else sha256_blocks_generic(state, data, nblocks); } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -static int __init sha256_sparc64_mod_init(void) +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { unsigned long cfr; =20 if (!(sparc64_elf_hwcap & HWCAP_SPARC_CRYPTO)) - return 0; + return; =20 __asm__ __volatile__("rd %%asr26, %0" : "=3Dr" (cfr)); if (!(cfr & CFR_SHA256)) - return 0; + return; =20 static_branch_enable(&have_sha256_opcodes); pr_info("Using sparc64 sha256 opcode optimized SHA-256/SHA-224 implementa= tion\n"); - return 0; } -subsys_initcall(sha256_sparc64_mod_init); - -static void __exit sha256_sparc64_mod_exit(void) -{ -} -module_exit(sha256_sparc64_mod_exit); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 
accelerated using the sparc64 sha256 opcodes"); diff --git a/lib/crypto/x86/Kconfig b/lib/crypto/x86/Kconfig index e344579db3d85..546fe2afe0b51 100644 --- a/lib/crypto/x86/Kconfig +++ b/lib/crypto/x86/Kconfig @@ -22,12 +22,5 @@ config CRYPTO_CHACHA20_X86_64 config CRYPTO_POLY1305_X86_64 tristate depends on 64BIT default CRYPTO_LIB_POLY1305 select CRYPTO_ARCH_HAVE_LIB_POLY1305 - -config CRYPTO_SHA256_X86_64 - tristate - depends on 64BIT - default CRYPTO_LIB_SHA256 - select CRYPTO_ARCH_HAVE_LIB_SHA256 - select CRYPTO_LIB_SHA256_GENERIC diff --git a/lib/crypto/x86/Makefile b/lib/crypto/x86/Makefile index abceca3d31c01..c2ff8c5f1046e 100644 --- a/lib/crypto/x86/Makefile +++ b/lib/crypto/x86/Makefile @@ -8,13 +8,10 @@ chacha-x86_64-y :=3D chacha-avx2-x86_64.o chacha-ssse3-x8= 6_64.o chacha-avx512vl-x8 =20 obj-$(CONFIG_CRYPTO_POLY1305_X86_64) +=3D poly1305-x86_64.o poly1305-x86_64-y :=3D poly1305-x86_64-cryptogams.o poly1305_glue.o targets +=3D poly1305-x86_64-cryptogams.S =20 -obj-$(CONFIG_CRYPTO_SHA256_X86_64) +=3D sha256-x86_64.o -sha256-x86_64-y :=3D sha256.o sha256-ssse3-asm.o sha256-avx-asm.o sha256-a= vx2-asm.o sha256-ni-asm.o - quiet_cmd_perlasm =3D PERLASM $@ cmd_perlasm =3D $(PERL) $< > $@ =20 $(obj)/%.S: $(src)/%.pl FORCE $(call if_changed,perlasm) diff --git a/lib/crypto/x86/sha256.c b/lib/crypto/x86/sha256.h similarity index 70% rename from lib/crypto/x86/sha256.c rename to lib/crypto/x86/sha256.h index 9ee38d2b3d572..3b5456c222ba6 100644 --- a/lib/crypto/x86/sha256.c +++ b/lib/crypto/x86/sha256.h @@ -1,16 +1,13 @@ -// SPDX-License-Identifier: GPL-2.0-or-later +/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * SHA-256 optimized for x86_64 * * Copyright 2025 Google LLC */ #include -#include #include -#include -#include #include =20 asmlinkage void sha256_transform_ssse3(struct sha256_block_state *state, const u8 *data, size_t nblocks); asmlinkage void sha256_transform_avx(struct sha256_block_state *state, @@ -22,47 +19,37 @@ asmlinkage void 
sha256_ni_transform(struct sha256_block= _state *state, =20 static __ro_after_init DEFINE_STATIC_KEY_FALSE(have_sha256_x86); =20 DEFINE_STATIC_CALL(sha256_blocks_x86, sha256_transform_ssse3); =20 -void sha256_blocks_arch(struct sha256_block_state *state, - const u8 *data, size_t nblocks) +static void sha256_blocks(struct sha256_block_state *state, + const u8 *data, size_t nblocks) { if (static_branch_likely(&have_sha256_x86) && crypto_simd_usable()) { kernel_fpu_begin(); static_call(sha256_blocks_x86)(state, data, nblocks); kernel_fpu_end(); } else { sha256_blocks_generic(state, data, nblocks); } } -EXPORT_SYMBOL_GPL(sha256_blocks_arch); =20 -static int __init sha256_x86_mod_init(void) +#define sha256_mod_init_arch sha256_mod_init_arch +static inline void sha256_mod_init_arch(void) { if (boot_cpu_has(X86_FEATURE_SHA_NI)) { static_call_update(sha256_blocks_x86, sha256_ni_transform); - } else if (cpu_has_xfeatures(XFEATURE_MASK_SSE | - XFEATURE_MASK_YMM, NULL) && + } else if (cpu_has_xfeatures(XFEATURE_MASK_SSE | XFEATURE_MASK_YMM, + NULL) && boot_cpu_has(X86_FEATURE_AVX)) { if (boot_cpu_has(X86_FEATURE_AVX2) && boot_cpu_has(X86_FEATURE_BMI2)) static_call_update(sha256_blocks_x86, sha256_transform_rorx); else static_call_update(sha256_blocks_x86, sha256_transform_avx); } else if (!boot_cpu_has(X86_FEATURE_SSSE3)) { - return 0; + return; } static_branch_enable(&have_sha256_x86); - return 0; } -subsys_initcall(sha256_x86_mod_init); - -static void __exit sha256_x86_mod_exit(void) -{ -} -module_exit(sha256_x86_mod_exit); - -MODULE_LICENSE("GPL"); -MODULE_DESCRIPTION("SHA-256 optimized for x86_64"); --=20 2.50.0 From nobody Sun Sep 7 11:34:13 2025 From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
x86@kernel.org, Ard Biesheuvel , "Jason A . Donenfeld" , Eric Biggers Subject: [PATCH v2 13/14] lib/crypto: sha256: Sync sha256_update() with sha512_update() Date: Mon, 30 Jun 2025 09:06:44 -0700 Message-ID: <20250630160645.3198-14-ebiggers@kernel.org> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org> References: <20250630160645.3198-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" The BLOCK_HASH_UPDATE_BLOCKS macro is difficult to read. For now, let's just write the update explicitly in the straightforward way, mirroring sha512_update(). It's possible that we'll bring back a macro for this later, but it needs to be properly justified and hopefully a bit more readable. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- lib/crypto/sha256.c | 28 +++++++++++++++++++++++++--- 1 file changed, 25 insertions(+), 3 deletions(-) diff --git a/lib/crypto/sha256.c b/lib/crypto/sha256.c index 68936d5cd7745..808438d4f4278 100644 --- a/lib/crypto/sha256.c +++ b/lib/crypto/sha256.c @@ -8,11 +8,10 @@ * Copyright (c) 2014 Red Hat Inc. 
* Copyright 2025 Google LLC */ =20 #include -#include #include #include #include #include #include @@ -178,12 +177,35 @@ EXPORT_SYMBOL_GPL(sha256_init); void __sha256_update(struct __sha256_ctx *ctx, const u8 *data, size_t len) { size_t partial =3D ctx->bytecount % SHA256_BLOCK_SIZE; =20 ctx->bytecount +=3D len; - BLOCK_HASH_UPDATE_BLOCKS(sha256_blocks, &ctx->state, data, len, - SHA256_BLOCK_SIZE, ctx->buf, partial); + + if (partial + len >=3D SHA256_BLOCK_SIZE) { + size_t nblocks; + + if (partial) { + size_t l =3D SHA256_BLOCK_SIZE - partial; + + memcpy(&ctx->buf[partial], data, l); + data +=3D l; + len -=3D l; + + sha256_blocks(&ctx->state, ctx->buf, 1); + } + + nblocks =3D len / SHA256_BLOCK_SIZE; + len %=3D SHA256_BLOCK_SIZE; + + if (nblocks) { + sha256_blocks(&ctx->state, data, nblocks); + data +=3D nblocks * SHA256_BLOCK_SIZE; + } + partial =3D 0; + } + if (len) + memcpy(&ctx->buf[partial], data, len); } EXPORT_SYMBOL(__sha256_update); =20 static void __sha256_final(struct __sha256_ctx *ctx, u8 *out, size_t digest_size) --=20 2.50.0 From nobody Sun Sep 7 11:34:13 2025 From: Eric Biggers To: linux-crypto@vger.kernel.org Cc: linux-kernel@vger.kernel.org, linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org, linuxppc-dev@lists.ozlabs.org, linux-riscv@lists.infradead.org, linux-s390@vger.kernel.org, sparclinux@vger.kernel.org, x86@kernel.org, Ard Biesheuvel , "Jason A .
Donenfeld" , Eric Biggers Subject: [PATCH v2 14/14] lib/crypto: sha256: Document the SHA-224 and SHA-256 API Date: Mon, 30 Jun 2025 09:06:45 -0700 Message-ID: <20250630160645.3198-15-ebiggers@kernel.org> X-Mailer: git-send-email 2.50.0 In-Reply-To: <20250630160645.3198-1-ebiggers@kernel.org> References: <20250630160645.3198-1-ebiggers@kernel.org> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable Content-Type: text/plain; charset="utf-8" Add kerneldoc comments, consistent with the kerneldoc comments of the SHA-384 and SHA-512 API. Signed-off-by: Eric Biggers Acked-by: Ard Biesheuvel --- include/crypto/sha2.h | 76 +++++++++++++++++++++++++++++++++++++++++++ 1 file changed, 76 insertions(+) diff --git a/include/crypto/sha2.h b/include/crypto/sha2.h index 2e3fc2cf4aa0d..e0a08f6addd00 100644 --- a/include/crypto/sha2.h +++ b/include/crypto/sha2.h @@ -153,17 +153,55 @@ void __hmac_sha256_init(struct __hmac_sha256_ctx *ctx, */ struct sha224_ctx { struct __sha256_ctx ctx; }; =20 +/** + * sha224_init() - Initialize a SHA-224 context for a new message + * @ctx: the context to initialize + * + * If you don't need incremental computation, consider sha224() instead. + * + * Context: Any context. + */ void sha224_init(struct sha224_ctx *ctx); + +/** + * sha224_update() - Update a SHA-224 context with message data + * @ctx: the context to update; must have been initialized + * @data: the message data + * @len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. 
+ */ static inline void sha224_update(struct sha224_ctx *ctx, const u8 *data, size_t len) { __sha256_update(&ctx->ctx, data, len); } + +/** + * sha224_final() - Finish computing a SHA-224 message digest + * @ctx: the context to finalize; must have been initialized + * @out: (output) the resulting SHA-224 message digest + * + * After finishing, this zeroizes @ctx. So the caller does not need to do= it. + * + * Context: Any context. + */ void sha224_final(struct sha224_ctx *ctx, u8 out[SHA224_DIGEST_SIZE]); + +/** + * sha224() - Compute SHA-224 message digest in one shot + * @data: the message data + * @len: the data length in bytes + * @out: (output) the resulting SHA-224 message digest + * + * Context: Any context. + */ void sha224(const u8 *data, size_t len, u8 out[SHA224_DIGEST_SIZE]); =20 /** * struct hmac_sha224_key - Prepared key for HMAC-SHA224 * @key: private @@ -273,17 +311,55 @@ void hmac_sha224_usingrawkey(const u8 *raw_key, size_= t raw_key_len, */ struct sha256_ctx { struct __sha256_ctx ctx; }; =20 +/** + * sha256_init() - Initialize a SHA-256 context for a new message + * @ctx: the context to initialize + * + * If you don't need incremental computation, consider sha256() instead. + * + * Context: Any context. + */ void sha256_init(struct sha256_ctx *ctx); + +/** + * sha256_update() - Update a SHA-256 context with message data + * @ctx: the context to update; must have been initialized + * @data: the message data + * @len: the data length in bytes + * + * This can be called any number of times. + * + * Context: Any context. + */ static inline void sha256_update(struct sha256_ctx *ctx, const u8 *data, size_t len) { __sha256_update(&ctx->ctx, data, len); } + +/** + * sha256_final() - Finish computing a SHA-256 message digest + * @ctx: the context to finalize; must have been initialized + * @out: (output) the resulting SHA-256 message digest + * + * After finishing, this zeroizes @ctx. So the caller does not need to do= it. + * + * Context: Any context. 
+ */ void sha256_final(struct sha256_ctx *ctx, u8 out[SHA256_DIGEST_SIZE]); + +/** + * sha256() - Compute SHA-256 message digest in one shot + * @data: the message data + * @len: the data length in bytes + * @out: (output) the resulting SHA-256 message digest + * + * Context: Any context. + */ void sha256(const u8 *data, size_t len, u8 out[SHA256_DIGEST_SIZE]); =20 /** * struct hmac_sha256_key - Prepared key for HMAC-SHA256 * @key: private --=20 2.50.0
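
[Editor's note: the partial-block buffering that patch 13 writes out explicitly in __sha256_update() can be exercised outside the kernel. The sketch below mirrors that logic with a toy block function that only counts consumed 64-byte blocks; `toy_ctx`, `toy_blocks`, and `toy_update` are illustrative names, not kernel APIs, and this is not the actual SHA-256 compression.]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 64

/* Toy stand-in for struct __sha256_ctx: counts bytes fed in and whole
 * blocks handed to the "compression" function. */
struct toy_ctx {
	uint64_t bytecount;
	uint64_t blocks_consumed;
	uint8_t buf[BLOCK_SIZE];
};

/* Stand-in for sha256_blocks(): just tallies consumed blocks. */
static void toy_blocks(struct toy_ctx *ctx, const uint8_t *data,
		       size_t nblocks)
{
	(void)data;
	ctx->blocks_consumed += nblocks;
}

/* Same shape as the explicit update loop in patch 13: top up any partial
 * block first, then pass whole blocks straight through, then stash the
 * tail in the buffer. */
static void toy_update(struct toy_ctx *ctx, const uint8_t *data, size_t len)
{
	size_t partial = ctx->bytecount % BLOCK_SIZE;

	ctx->bytecount += len;

	if (partial + len >= BLOCK_SIZE) {
		size_t nblocks;

		if (partial) {
			size_t l = BLOCK_SIZE - partial;

			memcpy(&ctx->buf[partial], data, l);
			data += l;
			len -= l;
			toy_blocks(ctx, ctx->buf, 1);
		}

		nblocks = len / BLOCK_SIZE;
		len %= BLOCK_SIZE;
		if (nblocks) {
			toy_blocks(ctx, data, nblocks);
			data += nblocks * BLOCK_SIZE;
		}
		partial = 0;
	}
	if (len)
		memcpy(&ctx->buf[partial], data, len);
}
```

Feeding 10 + 100 + 90 = 200 bytes in ragged chunks consumes 3 whole blocks and leaves an 8-byte tail buffered, matching what the in-kernel update would do.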