From: Alexey Romanov <avromanov@salutedevices.com>
Subject: [PATCH v10 11/22] crypto: amlogic - Introduce hasher
Date: Fri, 8 Nov 2024 13:28:56 +0300
Message-ID: <20241108102907.1788584-12-avromanov@salutedevices.com>
In-Reply-To: <20241108102907.1788584-1-avromanov@salutedevices.com>
References: <20241108102907.1788584-1-avromanov@salutedevices.com>
X-Mailing-List: linux-kernel@vger.kernel.org
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"

Introduce support for the SHA1/SHA224/SHA256 hash algorithms. Tested
via tcrypt and custom tests.

Signed-off-by: Alexey Romanov <avromanov@salutedevices.com>
---
 drivers/crypto/amlogic/Makefile             |   2 +-
 drivers/crypto/amlogic/amlogic-gxl-core.c   |  25 +-
 drivers/crypto/amlogic/amlogic-gxl-hasher.c | 507 ++++++++++++++++++++
 drivers/crypto/amlogic/amlogic-gxl.h        |  48 ++
 4 files changed, 580 insertions(+), 2 deletions(-)
 create mode 100644 drivers/crypto/amlogic/amlogic-gxl-hasher.c

diff --git a/drivers/crypto/amlogic/Makefile b/drivers/crypto/amlogic/Makefile
index 39057e62c13e..4b6b388b7880 100644
--- a/drivers/crypto/amlogic/Makefile
+++ b/drivers/crypto/amlogic/Makefile
@@ -1,2 +1,2 @@
 obj-$(CONFIG_CRYPTO_DEV_AMLOGIC_GXL) += amlogic-gxl-crypto.o
-amlogic-gxl-crypto-y := amlogic-gxl-core.o amlogic-gxl-cipher.o
+amlogic-gxl-crypto-y := amlogic-gxl-core.o amlogic-gxl-cipher.o amlogic-gxl-hasher.o
diff --git a/drivers/crypto/amlogic/amlogic-gxl-core.c b/drivers/crypto/amlogic/amlogic-gxl-core.c
index c1c445239549..706db22b9f65 100644
--- a/drivers/crypto/amlogic/amlogic-gxl-core.c
+++ b/drivers/crypto/amlogic/amlogic-gxl-core.c
@@ -19,6 +19,9 @@
 #include
 #include
 #include
+#include
+#include
+#include
 
 #include "amlogic-gxl.h"
 
@@ -172,6 +175,15 @@ int meson_register_algs(struct meson_dev *mc, struct meson_alg_template *algs,
 				return err;
 			}
 			break;
+		case CRYPTO_ALG_TYPE_AHASH:
+			err = crypto_engine_register_ahash(&algs[i].alg.ahash);
+			if (err) {
+				dev_err(mc->dev, "Fail to register %s\n",
+					algs[i].alg.ahash.base.halg.base.cra_name);
+				meson_unregister_algs(mc, algs, count);
+				return err;
+			}
+			break;
 		}
 	}
 
@@ -190,6 +202,9 @@ void meson_unregister_algs(struct meson_dev *mc, struct meson_alg_template *algs
 		case CRYPTO_ALG_TYPE_SKCIPHER:
 			crypto_engine_unregister_skcipher(&algs[i].alg.skcipher);
 			break;
+		case CRYPTO_ALG_TYPE_AHASH:
+			crypto_engine_unregister_ahash(&algs[i].alg.ahash);
+			break;
 		}
 	}
 }
@@ -227,13 +242,20 @@ static int meson_crypto_probe(struct platform_device *pdev)
 
 		dbgfs_dir = debugfs_create_dir("gxl-crypto", NULL);
 		debugfs_create_file("stats", 0444, dbgfs_dir, mc, &meson_debugfs_fops);
-
 #ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG
 		mc->dbgfs_dir = dbgfs_dir;
 #endif
 	}
 
+	err = meson_hasher_register(mc);
+	if (err)
+		goto error_hasher;
+
 	return 0;
+
+error_hasher:
+	meson_cipher_unregister(mc);
+
 error_flow:
 	meson_free_chanlist(mc, mc->flow_cnt - 1);
 	return err;
@@ -256,6 +278,7 @@ static const struct meson_pdata meson_gxl_pdata = {
 	.descs_reg = 0x0,
 	.status_reg = 0x4,
 	.setup_desc_cnt = 3,
+	.hasher_supported = false,
 };
 
 static const struct of_device_id meson_crypto_of_match_table[] = {
diff --git a/drivers/crypto/amlogic/amlogic-gxl-hasher.c b/drivers/crypto/amlogic/amlogic-gxl-hasher.c
new file mode 100644
index 000000000000..ffc1b23438be
--- /dev/null
+++ b/drivers/crypto/amlogic/amlogic-gxl-hasher.c
@@ -0,0 +1,507 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hardware asynchronous hasher for Amlogic SoCs.
+ *
+ * Copyright (c) 2023, SaluteDevices. All Rights Reserved.
+ *
+ * Author: Alexey Romanov <avromanov@salutedevices.com>
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "amlogic-gxl.h"
+
+static int meson_sha_init(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct meson_hasher_tfm_ctx *tctx = crypto_ahash_ctx_dma(tfm);
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+
+	memset(rctx, 0, sizeof(struct meson_hasher_req_ctx));
+
+	rctx->flow = meson_get_engine_number(tctx->mc);
+	rctx->begin_req = true;
+
+	return 0;
+}
+
+static int meson_sha_update(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct meson_hasher_tfm_ctx *tctx = crypto_ahash_ctx_dma(tfm);
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+	struct crypto_engine *engine = tctx->mc->chanlist[rctx->flow].engine;
+
+	return crypto_transfer_hash_request_to_engine(engine, req);
+}
+
+static int meson_sha_final(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct meson_hasher_tfm_ctx *tctx = crypto_ahash_ctx_dma(tfm);
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+	struct crypto_engine *engine = tctx->mc->chanlist[rctx->flow].engine;
+
+	rctx->final_req = true;
+
+	return crypto_transfer_hash_request_to_engine(engine, req);
+}
+
+static int meson_sha_digest(struct ahash_request *req)
+{
+	struct crypto_wait wait;
+	int ret;
+
+	crypto_init_wait(&wait);
+	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_SLEEP |
+					CRYPTO_TFM_REQ_MAY_BACKLOG,
+				   crypto_req_done, &wait);
+
+	meson_sha_init(req);
+
+	ret = crypto_wait_req(meson_sha_update(req), &wait);
+	if (ret)
+		return ret;
+
+	return crypto_wait_req(meson_sha_final(req), &wait);
+}
+
+static int meson_hasher_req_map(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct meson_hasher_tfm_ctx *tctx = crypto_ahash_ctx_dma(tfm);
+	struct meson_dev *mc = tctx->mc;
+	int ret;
+
+	if (!req->nbytes)
+		return 0;
+
+	ret = dma_map_sg(mc->dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+	if (!ret) {
+		dev_err(mc->dev, "Cannot DMA MAP request data\n");
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static void meson_hasher_req_unmap(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct meson_hasher_tfm_ctx *tctx = crypto_ahash_ctx_dma(tfm);
+	struct meson_dev *mc = tctx->mc;
+
+	if (!req->nbytes)
+		return;
+
+	dma_unmap_sg(mc->dev, req->src, sg_nents(req->src), DMA_TO_DEVICE);
+}
+
+struct hasher_ctx {
+	struct crypto_async_request *areq;
+
+	unsigned int tloffset;
+	unsigned int nbytes;
+	unsigned int todo;
+
+	dma_addr_t state_addr;
+	dma_addr_t src_addr;
+	unsigned int src_offset;
+	struct scatterlist *src_sg;
+};
+
+static bool meson_final(struct hasher_ctx *ctx)
+{
+	struct ahash_request *req = ahash_request_cast(ctx->areq);
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+
+	return !ctx->nbytes && rctx->final_req;
+}
+
+static int meson_fill_partial_buffer(struct hasher_ctx *ctx, unsigned int len)
+{
+	struct ahash_request *req = ahash_request_cast(ctx->areq);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct meson_hasher_tfm_ctx *tctx = crypto_ahash_ctx_dma(tfm);
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+	struct meson_dev *mc = tctx->mc;
+	unsigned int blocksize = crypto_ahash_blocksize(tfm);
+	unsigned int copy;
+
+	if (len) {
+		copy = min(blocksize - rctx->partial_size, len);
+		memcpy(rctx->partial + rctx->partial_size,
+		       sg_virt(ctx->src_sg) + ctx->src_offset, copy);
+
+		rctx->partial_size += copy;
+		ctx->nbytes -= copy;
+		ctx->src_offset += copy;
+	}
+
+	if (rctx->partial_size == blocksize || meson_final(ctx)) {
+		rctx->partial_addr = dma_map_single(mc->dev,
+						    rctx->partial,
+						    rctx->partial_size,
+						    DMA_TO_DEVICE);
+		if (dma_mapping_error(mc->dev, rctx->partial_addr)) {
+			dev_err(mc->dev, "Cannot DMA MAP SHA partial buffer\n");
+			return -ENOMEM;
+		}
+
+		rctx->partial_mapped = true;
+		ctx->todo = rctx->partial_size;
+		ctx->src_addr = rctx->partial_addr;
+	}
+
+	return 0;
+}
+
+static unsigned int meson_setup_data_descs(struct hasher_ctx *ctx)
+{
+	struct ahash_request *req = ahash_request_cast(ctx->areq);
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct meson_hasher_tfm_ctx *tctx = crypto_ahash_ctx_dma(tfm);
+	struct meson_dev *mc = tctx->mc;
+	struct meson_flow *flow = &mc->chanlist[rctx->flow];
+	struct hash_alg_common *alg = crypto_hash_alg_common(tfm);
+	struct meson_alg_template *algt = container_of(alg,
+			struct meson_alg_template, alg.ahash.base.halg);
+	struct meson_desc *desc = &flow->tl[ctx->tloffset];
+	u32 v;
+
+	ctx->tloffset++;
+
+	v = DESC_OWN | DESC_ENCRYPTION | DESC_OPMODE_SHA |
+	    ctx->todo | algt->blockmode;
+	if (rctx->begin_req) {
+		rctx->begin_req = false;
+		v |= DESC_BEGIN;
+	}
+
+	if (!ctx->nbytes && rctx->final_req) {
+		rctx->final_req = false;
+		v |= DESC_END;
+	}
+
+	if (!ctx->nbytes || ctx->tloffset == MAXDESC || rctx->partial_mapped)
+		v |= DESC_LAST;
+
+	desc->t_src = cpu_to_le32(ctx->src_addr);
+	desc->t_dst = cpu_to_le32(ctx->state_addr);
+	desc->t_status = cpu_to_le32(v);
+
+	return v & DESC_LAST;
+}
+
+static int meson_kick_hardware(struct hasher_ctx *ctx)
+{
+	struct ahash_request *req = ahash_request_cast(ctx->areq);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+	struct meson_hasher_tfm_ctx *tctx = crypto_ahash_ctx_dma(tfm);
+	struct meson_dev *mc = tctx->mc;
+	struct meson_flow *flow = &mc->chanlist[rctx->flow];
+	int ret;
+
+	reinit_completion(&flow->complete);
+	meson_dma_start(mc, rctx->flow);
+
+	ret = wait_for_completion_timeout(&flow->complete,
+					  msecs_to_jiffies(500));
+	if (ret == 0) {
+		dev_err(mc->dev, "DMA timeout for flow %d\n", rctx->flow);
+		return -EINVAL;
+	} else if (ret < 0) {
+		dev_err(mc->dev, "Waiting for DMA completion failed (%d)\n", ret);
+		return ret;
+	}
+
+	if (rctx->partial_mapped) {
+		dma_unmap_single(mc->dev, rctx->partial_addr,
+				 rctx->partial_size,
+				 DMA_TO_DEVICE);
+		rctx->partial_size = 0;
+		rctx->partial_mapped = false;
+	}
+
+	ctx->tloffset = 0;
+
+	return 0;
+}
+
+static void meson_setup_state_descs(struct hasher_ctx *ctx)
+{
+	struct ahash_request *req = ahash_request_cast(ctx->areq);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+	struct meson_hasher_tfm_ctx *tctx = crypto_ahash_ctx_dma(tfm);
+	struct meson_dev *mc = tctx->mc;
+	struct meson_desc *desc;
+	int i;
+
+	if (ctx->tloffset || rctx->begin_req)
+		return;
+
+	for (i = 0; i < mc->pdata->setup_desc_cnt; i++) {
+		int offset = i * 16;
+
+		desc = &mc->chanlist[rctx->flow].tl[ctx->tloffset];
+		desc->t_src = cpu_to_le32(ctx->state_addr + offset);
+		desc->t_dst = cpu_to_le32(offset);
+		desc->t_status = cpu_to_le32(MESON_SHA_BUFFER_SIZE |
+					     DESC_MODE_KEY | DESC_OWN);
+
+		ctx->tloffset++;
+	}
+}
+
+static int meson_hasher_do_one_request(struct crypto_engine *engine, void *areq)
+{
+	struct ahash_request *req = ahash_request_cast(areq);
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct meson_hasher_tfm_ctx *tctx = crypto_ahash_ctx_dma(tfm);
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+	struct meson_dev *mc = tctx->mc;
+	struct hasher_ctx ctx = {
+		.tloffset = 0,
+		.src_offset = 0,
+		.nbytes = rctx->final_req ? 0 : req->nbytes,
+		.src_sg = req->src,
+		.areq = areq,
+	};
+	unsigned int blocksize = crypto_ahash_blocksize(tfm);
+	unsigned int digest_size = crypto_ahash_digestsize(tfm);
+	bool final_req = rctx->final_req;
+	int ret;
+
+	ctx.state_addr = dma_map_single(mc->dev, rctx->state,
+					sizeof(rctx->state), DMA_BIDIRECTIONAL);
+	ret = dma_mapping_error(mc->dev, ctx.state_addr);
+	if (ret) {
+		dev_err(mc->dev, "Cannot DMA MAP SHA state buffer\n");
+		goto fail_map_single;
+	}
+
+	ret = meson_hasher_req_map(req);
+	if (ret)
+		goto fail_map_req;
+
+	for (;;) {
+		unsigned int len = ctx.src_sg ?
+			min(sg_dma_len(ctx.src_sg) - ctx.src_offset, ctx.nbytes) : 0;
+
+		ctx.src_addr = 0;
+		ctx.todo = 0;
+
+		if (!rctx->final_req && !ctx.nbytes)
+			break;
+
+		meson_setup_state_descs(&ctx);
+
+		if (rctx->partial_size && rctx->partial_size < blocksize) {
+			ret = meson_fill_partial_buffer(&ctx, len);
+			if (ret)
+				goto fail;
+		} else if (len && len < blocksize) {
+			memcpy(rctx->partial, sg_virt(ctx.src_sg) + ctx.src_offset, len);
+
+			rctx->partial_size = len;
+			ctx.nbytes -= len;
+			ctx.src_offset += len;
+		} else if (len) {
+			ctx.src_addr = sg_dma_address(ctx.src_sg) + ctx.src_offset;
+			ctx.todo = min(rounddown(DESC_MAXLEN, blocksize),
+				       rounddown(len, blocksize));
+			ctx.nbytes -= ctx.todo;
+			ctx.src_offset += ctx.todo;
+		}
+
+		if (ctx.src_sg && ctx.src_offset == sg_dma_len(ctx.src_sg)) {
+			ctx.src_offset = 0;
+			ctx.src_sg = sg_next(ctx.src_sg);
+		}
+
+		if (!ctx.todo && ctx.nbytes)
+			continue;
+
+		if (!ctx.todo && !rctx->final_req && !ctx.tloffset)
+			continue;
+
+		if (meson_setup_data_descs(&ctx)) {
+			ret = meson_kick_hardware(&ctx);
+			if (ret)
+				goto fail;
+		}
+	}
+
+fail:
+	meson_hasher_req_unmap(req);
+
+fail_map_req:
+	dma_unmap_single(mc->dev, ctx.state_addr, sizeof(rctx->state),
+			 DMA_BIDIRECTIONAL);
+
+fail_map_single:
+	if (final_req && ret == 0)
+		memcpy(req->result, rctx->state, digest_size);
+
+	local_bh_disable();
+	crypto_finalize_hash_request(engine, req, ret);
+	local_bh_enable();
+
+	return ret;
+}
+
+static int meson_hasher_export(struct ahash_request *req, void *out)
+{
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+
+	memcpy(out, rctx, sizeof(*rctx));
+	return 0;
+}
+
+static int meson_hasher_import(struct ahash_request *req, const void *in)
+{
+	struct meson_hasher_req_ctx *rctx = ahash_request_ctx(req);
+
+	memcpy(rctx, in, sizeof(*rctx));
+	return 0;
+}
+
+static int meson_hasher_init(struct crypto_tfm *tfm)
+{
+	struct meson_hasher_tfm_ctx *tctx = crypto_tfm_ctx_dma(tfm);
+	struct crypto_ahash *atfm = __crypto_ahash_cast(tfm);
+	struct hash_alg_common *alg = crypto_hash_alg_common(atfm);
+	struct meson_alg_template *algt = container_of(alg,
+			struct meson_alg_template, alg.ahash.base.halg);
+
+	crypto_ahash_set_reqsize(atfm, crypto_ahash_statesize(atfm));
+
+	memset(tctx, 0, sizeof(struct meson_hasher_tfm_ctx));
+
+	tctx->mc = algt->mc;
+
+	return 0;
+}
+
+static struct meson_alg_template mc_algs[] = {
+{
+	.type = CRYPTO_ALG_TYPE_AHASH,
+	.blockmode = DESC_MODE_SHA1,
+	.alg.ahash.base = {
+		.halg = {
+			.base = {
+				.cra_name = "sha1",
+				.cra_driver_name = "sha1-gxl",
+				.cra_priority = 400,
+				.cra_blocksize = SHA1_BLOCK_SIZE,
+				.cra_flags = CRYPTO_ALG_ASYNC,
+				.cra_ctxsize = sizeof(struct meson_hasher_tfm_ctx) +
+					CRYPTO_DMA_PADDING,
+				.cra_module = THIS_MODULE,
+				.cra_alignmask = 0,
+				.cra_init = meson_hasher_init,
+			},
+			.digestsize = SHA1_DIGEST_SIZE,
+			.statesize = sizeof(struct meson_hasher_req_ctx),
+		},
+		.init = meson_sha_init,
+		.update = meson_sha_update,
+		.final = meson_sha_final,
+		.digest = meson_sha_digest,
+		.export = meson_hasher_export,
+		.import = meson_hasher_import,
+	},
+	.alg.ahash.op = {
+		.do_one_request = meson_hasher_do_one_request,
+	},
+},
+{
+	.type = CRYPTO_ALG_TYPE_AHASH,
+	.blockmode = DESC_MODE_SHA224,
+	.alg.ahash.base = {
+		.halg = {
+			.base = {
+				.cra_name = "sha224",
+				.cra_driver_name = "sha224-gxl",
+				.cra_priority = 400,
+				.cra_blocksize = SHA224_BLOCK_SIZE,
+				.cra_flags = CRYPTO_ALG_ASYNC,
+				.cra_ctxsize = sizeof(struct meson_hasher_tfm_ctx) +
+					CRYPTO_DMA_PADDING,
+				.cra_module = THIS_MODULE,
+				.cra_alignmask = 0,
+				.cra_init = meson_hasher_init,
+			},
+			.digestsize = SHA224_DIGEST_SIZE,
+			.statesize = sizeof(struct meson_hasher_req_ctx),
+		},
+		.init = meson_sha_init,
+		.update = meson_sha_update,
+		.final = meson_sha_final,
+		.digest = meson_sha_digest,
+		.export = meson_hasher_export,
+		.import = meson_hasher_import,
+	},
+	.alg.ahash.op = {
+		.do_one_request = meson_hasher_do_one_request,
+	},
+},
+{
+	.type = CRYPTO_ALG_TYPE_AHASH,
+	.blockmode = DESC_MODE_SHA256,
+	.alg.ahash.base = {
+		.halg = {
+			.base = {
+				.cra_name = "sha256",
+				.cra_driver_name = "sha256-gxl",
+				.cra_priority = 400,
+				.cra_blocksize = SHA256_BLOCK_SIZE,
+				.cra_flags = CRYPTO_ALG_ASYNC,
+				.cra_ctxsize = sizeof(struct meson_hasher_tfm_ctx) +
+					CRYPTO_DMA_PADDING,
+				.cra_module = THIS_MODULE,
+				.cra_alignmask = 0,
+				.cra_init = meson_hasher_init,
+			},
+			.digestsize = SHA256_DIGEST_SIZE,
+			.statesize = sizeof(struct meson_hasher_req_ctx),
+		},
+		.init = meson_sha_init,
+		.update = meson_sha_update,
+		.final = meson_sha_final,
+		.digest = meson_sha_digest,
+		.export = meson_hasher_export,
+		.import = meson_hasher_import,
+	},
+	.alg.ahash.op = {
+		.do_one_request = meson_hasher_do_one_request,
+	},
+},
+};
+
+int meson_hasher_register(struct meson_dev *mc)
+{
+	if (!mc->pdata->hasher_supported) {
+		pr_info("amlogic-gxl-hasher: hasher is not supported on current platform\n");
+		return 0;
+	}
+
+	return meson_register_algs(mc, mc_algs, ARRAY_SIZE(mc_algs));
+}
+
+void meson_hasher_unregister(struct meson_dev *mc)
+{
+	if (!mc->pdata->hasher_supported)
+		return;
+
+	meson_unregister_algs(mc, mc_algs, ARRAY_SIZE(mc_algs));
+}
diff --git a/drivers/crypto/amlogic/amlogic-gxl.h b/drivers/crypto/amlogic/amlogic-gxl.h
index e2e843ea6c4c..3380583813a8 100644
--- a/drivers/crypto/amlogic/amlogic-gxl.h
+++ b/drivers/crypto/amlogic/amlogic-gxl.h
@@ -5,6 +5,7 @@
  * Copyright (C) 2018-2019 Corentin LABBE
  */
 #include
+#include
 #include
 #include
 #include
@@ -23,13 +24,22 @@
 
 #define DESC_OPMODE_ECB		(0 << 26)
 #define DESC_OPMODE_CBC		(1 << 26)
+#define DESC_OPMODE_SHA		(0 << 26)
 
 #define DESC_MAXLEN		GENMASK(16, 0)
 
+#define DESC_MODE_SHA1		(0x5 << 20)
+#define DESC_MODE_SHA224	(0x7 << 20)
+#define DESC_MODE_SHA256	(0x6 << 20)
+
 #define DESC_LAST		BIT(18)
+#define DESC_BEGIN		BIT(24)
+#define DESC_END		BIT(25)
 #define DESC_ENCRYPTION		BIT(28)
 #define DESC_OWN		BIT(31)
 
+#define MESON_SHA_BUFFER_SIZE	(SHA256_DIGEST_SIZE + 16)
+
 /*
  * struct meson_desc - Descriptor for DMA operations
  * Note that without datasheet, some are unknown
@@ -83,11 +93,13 @@ struct meson_flow {
 * @reg_descs:	offset to descriptors register
 * @reg_status:	offset to status register
 * @setup_desc_cnt:	number of setup descriptors to configure.
+ * @hasher_supported:	indicates whether the hasher is supported.
 */
 struct meson_pdata {
 	u32 descs_reg;
 	u32 status_reg;
 	u32 setup_desc_cnt;
+	bool hasher_supported;
 };
 
 /*
@@ -141,6 +153,38 @@ struct meson_cipher_tfm_ctx {
 	struct crypto_skcipher *fallback_tfm;
 };
 
+/*
+ * struct meson_hasher_req_ctx - context for a hasher request
+ * @state:	state data
+ * @partial:	partial buffer data; contains sent data whose
+ *		size is less than blocksize
+ * @partial_size:	size of the partial buffer
+ * @partial_addr:	physical address of the partial buffer
+ * @partial_mapped:	indicates whether the partial buffer is currently mapped
+ * @begin_req:	indicates the beginning of the request
+ * @final_req:	indicates the final part of the request
+ * @flow:	the flow to use for this request
+ */
+struct meson_hasher_req_ctx {
+	u8 state[SHA256_DIGEST_SIZE + 16] ____cacheline_aligned;
+	u8 partial[SHA256_BLOCK_SIZE] ____cacheline_aligned;
+	unsigned int partial_size ____cacheline_aligned;
+	dma_addr_t partial_addr;
+	bool partial_mapped;
+
+	bool begin_req;
+	bool final_req;
+	int flow;
+};
+
+/*
+ * struct meson_hasher_tfm_ctx - context for a hasher TFM
+ * @mc:	pointer to the private data of the driver handling this TFM
+ */
+struct meson_hasher_tfm_ctx {
+	struct meson_dev *mc;
+};
+
 /*
  * struct meson_alg_template - crypto_alg template
  * @type:	the CRYPTO_ALG_TYPE for this template
@@ -155,6 +199,7 @@ struct meson_alg_template {
 	u32 blockmode;
 	union {
 		struct skcipher_engine_alg skcipher;
+		struct ahash_engine_alg ahash;
 	} alg;
 	struct meson_dev *mc;
 #ifdef CONFIG_CRYPTO_DEV_AMLOGIC_GXL_DEBUG
@@ -176,3 +221,6 @@ int meson_cipher_register(struct meson_dev *mc);
 void meson_cipher_unregister(struct meson_dev *mc);
 void meson_cipher_debugfs_show(struct seq_file *seq, void *v);
 int meson_handle_cipher_request(struct crypto_engine *engine, void *areq);
+
+int meson_hasher_register(struct meson_dev *mc);
+void meson_hasher_unregister(struct meson_dev *mc);
-- 
2.34.1