From: Pavitrakumar Managutte
To: linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
	devicetree@vger.kernel.org, herbert@gondor.apana.org.au,
	robh@kernel.org
Cc: krzk+dt@kernel.org, conor+dt@kernel.org, Ruud.Derwig@synopsys.com,
	manjunath.hadli@vayavyalabs.com, adityak@vayavyalabs.com,
	Pavitrakumar Managutte, Bhoomika Kadabi
Subject: [PATCH v3 4/6] Add SPAcc ahash support
Date: Mon, 2 Jun 2025 11:02:29 +0530
Message-Id: <20250602053231.403143-5-pavitrakumarm@vayavyalabs.com>
In-Reply-To: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com>
References: <20250602053231.403143-1-pavitrakumarm@vayavyalabs.com>

Add ahash support to the SPAcc driver.
Below are the hash algos supported:
- cmac(aes)
- xcbc(aes)
- cmac(sm4)
- xcbc(sm4)
- hmac(md5)
- md5
- hmac(sha1)
- sha1
- sha224
- sha256
- sha384
- sha512
- hmac(sha224)
- hmac(sha256)
- hmac(sha384)
- hmac(sha512)
- sha3-224
- sha3-256
- sha3-384
- sha3-512
- hmac(sm3)
- sm3
- michael_mic

Co-developed-by: Bhoomika Kadabi
Signed-off-by: Bhoomika Kadabi
Signed-off-by: Pavitrakumar Managutte
Signed-off-by: Manjunath Hadli
Acked-by: Ruud Derwig
---
 drivers/crypto/dwc-spacc/spacc_ahash.c | 969 +++++++++++++++++++++++++
 1 file changed, 969 insertions(+)
 create mode 100644 drivers/crypto/dwc-spacc/spacc_ahash.c

diff --git a/drivers/crypto/dwc-spacc/spacc_ahash.c b/drivers/crypto/dwc-spacc/spacc_ahash.c
new file mode 100644
index 000000000000..cffc747ed332
--- /dev/null
+++ b/drivers/crypto/dwc-spacc/spacc_ahash.c
@@ -0,0 +1,969 @@
+// SPDX-License-Identifier: GPL-2.0
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+#include "spacc_core.h"
+#include "spacc_device.h"
+
+#define PPP_BUF_SIZE 128
+
+struct sdesc {
+	struct shash_desc shash;
+	char ctx[];
+};
+
+static struct dma_pool *spacc_hash_pool;
+static LIST_HEAD(spacc_hash_alg_list);
+static DEFINE_MUTEX(spacc_hash_alg_mutex);
+
+static struct mode_tab possible_hashes[] = {
+	{ .keylen[0] = 16, MODE_TAB_HASH("cmac(aes)", MAC_CMAC, 16, 16),
+	  .sw_fb = true },
+	{ .keylen[0] = 48 | MODE_TAB_HASH_XCBC, MODE_TAB_HASH("xcbc(aes)",
+	  MAC_XCBC, 16, 16), .sw_fb = true },
+
+	{ MODE_TAB_HASH("cmac(sm4)", MAC_SM4_CMAC, 16, 16), .sw_fb = true },
+	{ .keylen[0] = 32 | MODE_TAB_HASH_XCBC, MODE_TAB_HASH("xcbc(sm4)",
+	  MAC_SM4_XCBC, 16, 16), .sw_fb = true },
+
+	{ MODE_TAB_HASH("hmac(md5)", HMAC_MD5, MD5_DIGEST_SIZE,
+	  MD5_HMAC_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("md5", HASH_MD5, MD5_DIGEST_SIZE,
+	  MD5_HMAC_BLOCK_SIZE), .sw_fb = true },
+
+	{ MODE_TAB_HASH("hmac(sha1)", HMAC_SHA1, SHA1_DIGEST_SIZE,
+	  SHA1_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha1", HASH_SHA1, SHA1_DIGEST_SIZE,
+	  SHA1_BLOCK_SIZE), .sw_fb = true },
+
+	{ MODE_TAB_HASH("sha224", HASH_SHA224, SHA224_DIGEST_SIZE,
+	  SHA224_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha256", HASH_SHA256, SHA256_DIGEST_SIZE,
+	  SHA256_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha384", HASH_SHA384, SHA384_DIGEST_SIZE,
+	  SHA384_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha512", HASH_SHA512, SHA512_DIGEST_SIZE,
+	  SHA512_BLOCK_SIZE), .sw_fb = true },
+
+	{ MODE_TAB_HASH("hmac(sha512)", HMAC_SHA512, SHA512_DIGEST_SIZE,
+	  SHA512_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("hmac(sha224)", HMAC_SHA224, SHA224_DIGEST_SIZE,
+	  SHA224_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("hmac(sha256)", HMAC_SHA256, SHA256_DIGEST_SIZE,
+	  SHA256_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("hmac(sha384)", HMAC_SHA384, SHA384_DIGEST_SIZE,
+	  SHA384_BLOCK_SIZE), .sw_fb = true },
+
+	{ MODE_TAB_HASH("sha3-224", HASH_SHA3_224, SHA3_224_DIGEST_SIZE,
+	  SHA3_224_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha3-256", HASH_SHA3_256, SHA3_256_DIGEST_SIZE,
+	  SHA3_256_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha3-384", HASH_SHA3_384, SHA3_384_DIGEST_SIZE,
+	  SHA3_384_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sha3-512", HASH_SHA3_512, SHA3_512_DIGEST_SIZE,
+	  SHA3_512_BLOCK_SIZE), .sw_fb = true },
+
+	{ MODE_TAB_HASH("hmac(sm3)", HMAC_SM3, SM3_DIGEST_SIZE,
+	  SM3_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("sm3", HASH_SM3, SM3_DIGEST_SIZE,
+	  SM3_BLOCK_SIZE), .sw_fb = true },
+	{ MODE_TAB_HASH("michael_mic", MAC_MICHAEL, 8, 8), .sw_fb = true },
+
+};
+
+static void spacc_hash_cleanup_dma_dst(struct spacc_crypto_ctx *tctx,
+				       struct ahash_request *req)
+{
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	pdu_ddt_free(&ctx->dst);
+}
+
+static void spacc_hash_cleanup_dma_src(struct spacc_crypto_ctx *tctx,
+				       struct ahash_request *req)
+{
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	if (tctx->tmp_sgl && tctx->tmp_sgl[0].length != 0) {
+		dma_unmap_sg(tctx->dev, tctx->tmp_sgl, ctx->src_nents,
+			     DMA_TO_DEVICE);
+		kfree(tctx->tmp_sgl_buff);
+		tctx->tmp_sgl_buff = NULL;
+		tctx->tmp_sgl[0].length = 0;
+	} else {
+		dma_unmap_sg(tctx->dev, req->src, ctx->src_nents,
+			     DMA_TO_DEVICE);
+	}
+
+	pdu_ddt_free(&ctx->src);
+}
+
+static void spacc_hash_cleanup_dma(struct device *dev,
+				   struct ahash_request *req)
+{
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	dma_unmap_sg(dev, req->src, ctx->src_nents, DMA_TO_DEVICE);
+	pdu_ddt_free(&ctx->src);
+
+	dma_pool_free(spacc_hash_pool, ctx->digest_buf, ctx->digest_dma);
+	pdu_ddt_free(&ctx->dst);
+}
+
+static void spacc_init_calg(struct crypto_alg *calg,
+			    const struct mode_tab *mode)
+{
+	strscpy(calg->cra_name, mode->name);
+	calg->cra_name[sizeof(mode->name) - 1] = '\0';
+
+	strscpy(calg->cra_driver_name, "spacc-");
+	strcat(calg->cra_driver_name, mode->name);
+	calg->cra_driver_name[sizeof(calg->cra_driver_name) - 1] = '\0';
+
+	calg->cra_blocksize = mode->blocklen;
+}
+
+static int spacc_ctx_clone_handle(struct ahash_request *req)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(tfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+	struct spacc_priv *priv = dev_get_drvdata(tctx->dev);
+
+	if (tctx->handle < 0)
+		return -EINVAL;
+
+	ctx->acb.new_handle = spacc_clone_handle(&priv->spacc, tctx->handle,
+						 &ctx->acb);
+
+	if (ctx->acb.new_handle < 0) {
+		spacc_hash_cleanup_dma(tctx->dev, req);
+		return -ENOMEM;
+	}
+
+	ctx->acb.tctx = tctx;
+	ctx->acb.ctx = ctx;
+	ctx->acb.req = req;
+	ctx->acb.spacc = &priv->spacc;
+
+	return 0;
+}
+
+static int spacc_hash_init_dma(struct device *dev, struct ahash_request *req)
+{
+	int rc = -1;
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(tfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	gfp_t mflags = GFP_ATOMIC;
+
+	if (req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP)
+		mflags = GFP_KERNEL;
+
+	ctx->digest_buf = dma_pool_alloc(spacc_hash_pool, mflags,
+					 &ctx->digest_dma);
+
+	if (!ctx->digest_buf)
+		return -ENOMEM;
+
+	rc = pdu_ddt_init(dev, &ctx->dst, 1 | 0x80000000);
+	if (rc < 0) {
+		dev_err(dev, "ERR: PDU DDT init error\n");
+		rc = -EIO;
+		goto err_free_digest;
+	}
+
+	pdu_ddt_add(dev, &ctx->dst, ctx->digest_dma, SPACC_MAX_DIGEST_SIZE);
+
+	if (ctx->total_nents > 0 && ctx->single_shot) {
+		/* single shot */
+		spacc_ctx_clone_handle(req);
+
+		if (req->nbytes) {
+			rc = spacc_sg_to_ddt(dev, req->src, req->nbytes,
+					     &ctx->src, DMA_TO_DEVICE);
+		} else {
+			memset(tctx->tmp_buffer, '\0', PPP_BUF_SIZE);
+			sg_set_buf(&tctx->tmp_sgl[0], tctx->tmp_buffer,
+				   PPP_BUF_SIZE);
+			rc = spacc_sg_to_ddt(dev, &tctx->tmp_sgl[0],
+					     tctx->tmp_sgl[0].length,
+					     &ctx->src, DMA_TO_DEVICE);
+		}
+	} else if (ctx->total_nents == 0 && req->nbytes == 0) {
+		spacc_ctx_clone_handle(req);
+
+		/* zero length case */
+		memset(tctx->tmp_buffer, '\0', PPP_BUF_SIZE);
+		sg_set_buf(&tctx->tmp_sgl[0], tctx->tmp_buffer, PPP_BUF_SIZE);
+		rc = spacc_sg_to_ddt(dev, &tctx->tmp_sgl[0],
+				     tctx->tmp_sgl[0].length,
+				     &ctx->src, DMA_TO_DEVICE);
+	}
+
+	if (rc < 0)
+		goto err_free_dst;
+
+	ctx->src_nents = rc;
+
+	return rc;
+
+err_free_dst:
+	pdu_ddt_free(&ctx->dst);
+err_free_digest:
+	dma_pool_free(spacc_hash_pool, ctx->digest_buf, ctx->digest_dma);
+
+	return rc;
+}
+
+static void spacc_free_mems(struct spacc_crypto_reqctx *ctx,
+			    struct spacc_crypto_ctx *tctx,
+			    struct ahash_request *req)
+{
+	spacc_hash_cleanup_dma_dst(tctx, req);
+	spacc_hash_cleanup_dma_src(tctx, req);
+
+	if (ctx->single_shot) {
+		kfree(tctx->tmp_sgl);
+		tctx->tmp_sgl = NULL;
+
+		ctx->single_shot = 0;
+		if (ctx->total_nents)
+			ctx->total_nents = 0;
+	}
+}
+
+static void spacc_digest_cb(void *spacc, void *tfm)
+{
+	int dig_sz;
+	int err = -1;
+	struct ahash_cb_data *cb = tfm;
+	struct spacc_device *device = (struct spacc_device *)spacc;
+
+	dig_sz = crypto_ahash_digestsize(crypto_ahash_reqtfm(cb->req));
+
+	if (cb->ctx->single_shot)
+		memcpy(cb->req->result, cb->ctx->digest_buf, dig_sz);
+	else
+		memcpy(cb->tctx->digest_ctx_buf, cb->ctx->digest_buf, dig_sz);
+
+	err = cb->spacc->job[cb->new_handle].job_err;
+
+	dma_pool_free(spacc_hash_pool, cb->ctx->digest_buf,
+		      cb->ctx->digest_dma);
+	spacc_free_mems(cb->ctx, cb->tctx, cb->req);
+	spacc_close(cb->spacc, cb->new_handle);
+
+	if (cb->req->base.complete) {
+		local_bh_disable();
+		ahash_request_complete(cb->req, err);
+		local_bh_enable();
+	}
+
+	if (atomic_read(&device->wait_counter) > 0) {
+		struct spacc_completion *cur_pos, *next_pos;
+
+		/* wake up waitqueue to obtain a context */
+		atomic_dec(&device->wait_counter);
+		if (atomic_read(&device->wait_counter) > 0) {
+			mutex_lock(&device->spacc_waitq_mutex);
+			list_for_each_entry_safe(cur_pos, next_pos,
+						 &device->spacc_wait_list,
+						 list) {
+				if (cur_pos && cur_pos->wait_done == 1) {
+					cur_pos->wait_done = 0;
+					complete(&cur_pos->spacc_wait_complete);
+					list_del(&cur_pos->list);
+					break;
+				}
+			}
+			mutex_unlock(&device->spacc_waitq_mutex);
+		}
+	}
+}
+
+static int do_shash(struct device *dev, unsigned char *name,
+		    unsigned char *result, const u8 *data1,
+		    unsigned int data1_len, const u8 *data2,
+		    unsigned int data2_len, const u8 *key,
+		    unsigned int key_len)
+{
+	int rc = 0;
+	unsigned int size;
+	struct sdesc *sdesc;
+	struct crypto_shash *hash;
+
+	hash = crypto_alloc_shash(name, 0, 0);
+	if (IS_ERR(hash)) {
+		rc = PTR_ERR(hash);
+		dev_err(dev, "ERR: Crypto %s allocation error %d\n", name, rc);
+		return rc;
+	}
+
+	size = sizeof(struct shash_desc) + crypto_shash_descsize(hash);
+	sdesc = kmalloc(size, GFP_KERNEL);
+	if (!sdesc) {
+		rc = -ENOMEM;
+		goto do_shash_err;
+	}
+	sdesc->shash.tfm = hash;
+
+	if (key_len > 0) {
+		rc = crypto_shash_setkey(hash, key, key_len);
+		if (rc) {
+			dev_err(dev, "ERR: Could not setkey %s shash\n", name);
+			goto do_shash_err;
+		}
+	}
+
+	rc = crypto_shash_init(&sdesc->shash);
+	if (rc) {
+		dev_err(dev, "ERR: Could not init %s shash\n", name);
+		goto do_shash_err;
+	}
+
+	rc = crypto_shash_update(&sdesc->shash, data1, data1_len);
+	if (rc) {
+		dev_err(dev, "ERR: Could not update1\n");
+		goto do_shash_err;
+	}
+
+	if (data2 && data2_len) {
+		rc = crypto_shash_update(&sdesc->shash, data2, data2_len);
+		if (rc) {
+			dev_err(dev, "ERR: Could not update2\n");
+			goto do_shash_err;
+		}
+	}
+
+	rc = crypto_shash_final(&sdesc->shash, result);
+	if (rc)
+		dev_err(dev, "ERR: Could not generate %s hash\n", name);
+
+do_shash_err:
+	crypto_free_shash(hash);
+	kfree(sdesc);
+
+	return rc;
+}
+
+static int spacc_hash_setkey(struct crypto_ahash *tfm, const u8 *key,
+			     unsigned int keylen)
+{
+	int rc = 0;
+	int ret = 0;
+	unsigned int block_size;
+	unsigned int digest_size;
+	char hash_alg[CRYPTO_MAX_ALG_NAME];
+	const struct spacc_alg *salg = spacc_tfm_ahash(&tfm->base);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(tfm);
+	struct spacc_priv *priv = dev_get_drvdata(tctx->dev);
+
+	block_size = crypto_tfm_alg_blocksize(&tfm->base);
+	digest_size = crypto_ahash_digestsize(tfm);
+
+	/*
+	 * We will not use the hardware in case of HMACs
+	 * This was meant for hashes but it works for cmac/xcbc since we
+	 * only intend to support 128-bit keys...
+	 */
+	if (keylen > block_size && salg->mode->id != CRYPTO_MODE_MAC_CMAC) {
+		dev_dbg(salg->dev, "Exceeds keylen: %u\n", keylen);
+		dev_dbg(salg->dev, "Req. keylen hashing %s\n",
+			salg->calg->cra_name);
+
+		memset(hash_alg, 0x00, CRYPTO_MAX_ALG_NAME);
+		switch (salg->mode->id) {
+		case CRYPTO_MODE_HMAC_SHA224:
+			rc = do_shash(salg->dev, "sha224", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		case CRYPTO_MODE_HMAC_SHA256:
+			rc = do_shash(salg->dev, "sha256", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		case CRYPTO_MODE_HMAC_SHA384:
+			rc = do_shash(salg->dev, "sha384", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		case CRYPTO_MODE_HMAC_SHA512:
+			rc = do_shash(salg->dev, "sha512", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		case CRYPTO_MODE_HMAC_MD5:
+			rc = do_shash(salg->dev, "md5", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		case CRYPTO_MODE_HMAC_SHA1:
+			rc = do_shash(salg->dev, "sha1", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+		case CRYPTO_MODE_HMAC_SM3:
+			rc = do_shash(salg->dev, "sm3", tctx->ipad, key,
+				      keylen, NULL, 0, NULL, 0);
+			break;
+
+		default:
+			return -EINVAL;
+		}
+
+		if (rc < 0) {
+			dev_err(salg->dev, "ERR: %d computing shash for %s\n",
+				rc, hash_alg);
+			return -EIO;
+		}
+
+		keylen = digest_size;
+		dev_dbg(salg->dev, "updated keylen: %u\n", keylen);
+
+		tctx->ctx_valid = false;
+
+		if (salg->mode->sw_fb) {
+			rc = crypto_ahash_setkey(tctx->fb.hash,
+						 tctx->ipad, keylen);
+			if (rc < 0)
+				return rc;
+		}
+	} else {
+		memcpy(tctx->ipad, key, keylen);
+		tctx->ctx_valid = false;
+
+		if (salg->mode->sw_fb) {
+			rc = crypto_ahash_setkey(tctx->fb.hash, key, keylen);
+			if (rc < 0)
+				return rc;
+		}
+	}
+
+	/* close handle since key size may have changed */
+	if (tctx->handle >= 0) {
+		spacc_close(&priv->spacc, tctx->handle);
+		put_device(tctx->dev);
+		tctx->handle = -1;
+		tctx->dev = NULL;
+	}
+
+	/* reset priv */
+	priv = NULL;
+	priv = dev_get_drvdata(salg->dev);
+	tctx->dev = get_device(salg->dev);
+	ret = spacc_is_mode_keysize_supported(&priv->spacc, salg->mode->id,
+					      keylen, 1);
+	if (ret) {
+		/* Grab the spacc context if no one is waiting */
+		tctx->handle = spacc_open(&priv->spacc,
+					  CRYPTO_MODE_NULL,
+					  salg->mode->id, -1,
+					  0, spacc_digest_cb, tfm);
+		if (tctx->handle < 0) {
+			dev_err(salg->dev, "ERR: Failed to open SPAcc context\n");
+			put_device(salg->dev);
+			return -EIO;
+		}
+
+	} else {
+		dev_err(salg->dev, "Keylen: %d not enabled for algo: %d",
+			keylen, salg->mode->id);
+	}
+
+	rc = spacc_set_operation(&priv->spacc, tctx->handle, OP_ENCRYPT,
+				 ICV_HASH, IP_ICV_OFFSET, 0, 0, 0);
+	if (rc < 0) {
+		spacc_close(&priv->spacc, tctx->handle);
+		tctx->handle = -1;
+		put_device(tctx->dev);
+		return -EIO;
+	}
+
+	if (salg->mode->id == CRYPTO_MODE_MAC_XCBC ||
+	    salg->mode->id == CRYPTO_MODE_MAC_SM4_XCBC) {
+		rc = spacc_compute_xcbc_key(&priv->spacc, salg->mode->id,
+					    tctx->handle, tctx->ipad,
+					    keylen, tctx->ipad);
+		if (rc < 0) {
+			dev_err(tctx->dev,
+				"Failed to compute XCBC key: %d\n", rc);
+			return -EIO;
+		}
+		rc = spacc_write_context(&priv->spacc, tctx->handle,
+					 SPACC_HASH_OPERATION, tctx->ipad,
+					 32 + keylen, NULL, 0);
+	} else {
+		rc = spacc_write_context(&priv->spacc, tctx->handle,
+					 SPACC_HASH_OPERATION, tctx->ipad,
+					 keylen, NULL, 0);
+	}
+
+	memset(tctx->ipad, 0, sizeof(tctx->ipad));
+	if (rc < 0) {
+		dev_err(tctx->dev, "ERR: Failed to write SPAcc context\n");
+		/* Non-fatal, we continue with the software fallback */
+		return 0;
+	}
+
+	tctx->ctx_valid = true;
+
+	return 0;
+}
+
+static int spacc_set_statesize(struct spacc_alg *salg)
+{
+	unsigned int statesize = 0;
+
+	switch (salg->mode->id) {
+	case CRYPTO_MODE_HMAC_SHA1:
+	case CRYPTO_MODE_HASH_SHA1:
+		statesize = sizeof(struct sha1_state);
+		break;
+	case CRYPTO_MODE_MAC_CMAC:
+	case CRYPTO_MODE_MAC_XCBC:
+		statesize = sizeof(struct crypto_aes_ctx);
+		break;
+	case CRYPTO_MODE_MAC_SM4_CMAC:
+	case CRYPTO_MODE_MAC_SM4_XCBC:
+		statesize = sizeof(struct sm4_ctx);
+		break;
+	case CRYPTO_MODE_HMAC_MD5:
+	case CRYPTO_MODE_HASH_MD5:
+		statesize = sizeof(struct md5_state);
+		break;
+	case CRYPTO_MODE_HASH_SHA224:
+	case CRYPTO_MODE_HASH_SHA256:
+	case CRYPTO_MODE_HMAC_SHA224:
+	case CRYPTO_MODE_HMAC_SHA256:
+		statesize = sizeof(struct sha256_state);
+		break;
+	case CRYPTO_MODE_HMAC_SHA512:
+	case CRYPTO_MODE_HASH_SHA512:
+		statesize = sizeof(struct sha512_state);
+		break;
+	case CRYPTO_MODE_HMAC_SHA384:
+	case CRYPTO_MODE_HASH_SHA384:
+		statesize = sizeof(struct spacc_crypto_reqctx);
+		break;
+	case CRYPTO_MODE_HASH_SHA3_224:
+	case CRYPTO_MODE_HASH_SHA3_256:
+	case CRYPTO_MODE_HASH_SHA3_384:
+	case CRYPTO_MODE_HASH_SHA3_512:
+		statesize = sizeof(struct sha3_state);
+		break;
+	case CRYPTO_MODE_HMAC_SM3:
+	case CRYPTO_MODE_MAC_MICHAEL:
+		statesize = sizeof(struct spacc_crypto_reqctx);
+		break;
+	default:
+		break;
+	}
+
+	return statesize;
+}
+
+static int spacc_hash_init_tfm(struct crypto_ahash *tfm)
+{
+	struct spacc_priv *priv = NULL;
+	const struct spacc_alg *salg = container_of(crypto_ahash_alg(tfm),
+						    struct spacc_alg, alg.hash);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(tfm);
+
+	tctx->handle = -1;
+	tctx->ctx_valid = false;
+	tctx->dev = get_device(salg->dev);
+	priv = dev_get_drvdata(tctx->dev);
+
+	tctx->fb.hash = crypto_alloc_ahash(crypto_ahash_alg_name(tfm), 0,
+					   CRYPTO_ALG_NEED_FALLBACK);
+	if (IS_ERR(tctx->fb.hash)) {
+		spacc_close(&priv->spacc, tctx->handle);
+		put_device(tctx->dev);
+		return PTR_ERR(tctx->fb.hash);
+	}
+
+	crypto_ahash_set_statesize(tfm,
+				   crypto_ahash_statesize(tctx->fb.hash));
+
+	crypto_ahash_set_reqsize(tfm,
+				 sizeof(struct spacc_crypto_reqctx) +
+				 crypto_ahash_reqsize(tctx->fb.hash));
+
+	return 0;
+}
+
+static void spacc_hash_exit_tfm(struct crypto_ahash *tfm)
+{
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(tfm);
+	struct spacc_priv *priv = dev_get_drvdata(tctx->dev);
+
+	crypto_free_ahash(tctx->fb.hash);
+	tctx->fb.hash = NULL;
+	if (tctx->handle >= 0)
+		spacc_close(&priv->spacc, tctx->handle);
+
+	put_device(tctx->dev);
+}
+
+static int spacc_hash_init(struct ahash_request *req)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	rc = crypto_ahash_init(&ctx->fb.hash_req);
+
+	return rc;
+}
+
+static int spacc_hash_update(struct ahash_request *req)
+{
+	int rc = 0;
+	int nbytes = req->nbytes;
+
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	if (!nbytes)
+		return 0;
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	ctx->fb.hash_req.nbytes = nbytes;
+	ctx->fb.hash_req.src = req->src;
+
+	rc = crypto_ahash_update(&ctx->fb.hash_req);
+
+	return rc;
+}
+
+static int spacc_hash_final(struct ahash_request *req)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	ctx->fb.hash_req.result = req->result;
+
+	rc = crypto_ahash_final(&ctx->fb.hash_req);
+
+	return rc;
+}
+
+static int spacc_hash_digest(struct ahash_request *req)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+	struct spacc_priv *priv = dev_get_drvdata(tctx->dev);
+	const struct spacc_alg *salg = spacc_tfm_ahash(&reqtfm->base);
+
+	/* direct single shot digest call */
+	ctx->single_shot = 1;
+	ctx->total_nents = sg_nents(req->src);
+
+	/* alloc tmp_sgl */
+	tctx->tmp_sgl = kmalloc(sizeof(*tctx->tmp_sgl) * 2, GFP_KERNEL);
+
+	if (!tctx->tmp_sgl)
+		return -ENOMEM;
+
+	sg_init_table(tctx->tmp_sgl, 2);
+	tctx->tmp_sgl[0].length = 0;
+
+	if (tctx->handle < 0 || !tctx->ctx_valid) {
+		priv = NULL;
+		priv = dev_get_drvdata(salg->dev);
+		tctx->dev = get_device(salg->dev);
+
+		rc = spacc_is_mode_keysize_supported(&priv->spacc,
+						     salg->mode->id, 0, 1);
+		if (rc)
+			tctx->handle = spacc_open(&priv->spacc,
+						  CRYPTO_MODE_NULL,
+						  salg->mode->id, -1, 0,
+						  spacc_digest_cb,
+						  reqtfm);
+		if (tctx->handle < 0) {
+			put_device(salg->dev);
+			dev_dbg(salg->dev,
+				"Digest:failed to open spacc context\n");
+			goto fallback;
+		}
+
+		rc = spacc_set_operation(&priv->spacc, tctx->handle,
+					 OP_ENCRYPT, ICV_HASH, IP_ICV_OFFSET,
+					 0, 0, 0);
+		if (rc < 0) {
+			spacc_close(&priv->spacc, tctx->handle);
+			dev_dbg(salg->dev,
+				"ERR: Failed to set operation\n");
+			tctx->handle = -1;
+			put_device(tctx->dev);
+			goto fallback;
+		}
+		tctx->ctx_valid = true;
+	}
+
+	rc = spacc_hash_init_dma(tctx->dev, req);
+	if (rc < 0)
+		goto fallback;
+
+	if (rc == 0) {
+		kfree(tctx->tmp_sgl);
+		tctx->tmp_sgl = NULL;
+		return 0;
+	}
+
+	rc = spacc_packet_enqueue_ddt(&priv->spacc, ctx->acb.new_handle,
+				      &ctx->src, &ctx->dst, req->nbytes,
+				      0, req->nbytes, 0, 0, 0);
+
+	if (rc < 0) {
+		spacc_hash_cleanup_dma(tctx->dev, req);
+		spacc_close(&priv->spacc, ctx->acb.new_handle);
+
+		if (rc != -EBUSY) {
+			dev_err(salg->dev, "ERR: Failed to enqueue job: %d\n",
+				rc);
+			kfree(tctx->tmp_sgl);
+			tctx->tmp_sgl = NULL;
+			return rc;
+		}
+
+		if (!(req->base.flags & CRYPTO_TFM_REQ_MAY_BACKLOG))
+			return -EBUSY;
+
+		goto fallback;
+	}
+
+	return -EINPROGRESS;
+
+fallback:
+	kfree(tctx->tmp_sgl);
+	tctx->tmp_sgl = NULL;
+
+	/* start from scratch as init is not called before digest */
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	ctx->fb.hash_req.nbytes = req->nbytes;
+	ctx->fb.hash_req.src = req->src;
+	ctx->fb.hash_req.result = req->result;
+
+	rc = crypto_ahash_digest(&ctx->fb.hash_req);
+
+	return rc;
+}
+
+static int spacc_hash_finup(struct ahash_request *req)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	ctx->fb.hash_req.nbytes = req->nbytes;
+	ctx->fb.hash_req.src = req->src;
+	ctx->fb.hash_req.result = req->result;
+
+	rc = crypto_ahash_finup(&ctx->fb.hash_req);
+
+	return rc;
+}
+
+static int spacc_hash_import(struct ahash_request *req, const void *in)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	rc = crypto_ahash_import(&ctx->fb.hash_req, in);
+
+	return rc;
+}
+
+static int spacc_hash_export(struct ahash_request *req, void *out)
+{
+	int rc = 0;
+	struct crypto_ahash *reqtfm = crypto_ahash_reqtfm(req);
+	struct spacc_crypto_ctx *tctx = crypto_ahash_ctx(reqtfm);
+	struct spacc_crypto_reqctx *ctx = ahash_request_ctx(req);
+
+	ahash_request_set_tfm(&ctx->fb.hash_req, tctx->fb.hash);
+
+	ahash_request_set_callback(&ctx->fb.hash_req,
+				   CRYPTO_TFM_REQ_MAY_SLEEP,
+				   req->base.complete,
+				   req->base.data);
+
+	rc = crypto_ahash_export(&ctx->fb.hash_req, out);
+
+	return rc;
+}
+
+static const struct ahash_alg spacc_hash_template = {
+	.init = spacc_hash_init,
+	.update = spacc_hash_update,
+	.final = spacc_hash_final,
+	.finup = spacc_hash_finup,
+	.digest = spacc_hash_digest,
+	.setkey = spacc_hash_setkey,
+	.export = spacc_hash_export,
+	.import = spacc_hash_import,
+	.init_tfm = spacc_hash_init_tfm,
+	.exit_tfm = spacc_hash_exit_tfm,
+
+	.halg.base = {
+		.cra_priority = 300,
+		.cra_module = THIS_MODULE,
+		.cra_ctxsize = sizeof(struct spacc_crypto_ctx),
+		.cra_flags = CRYPTO_ALG_TYPE_AHASH |
+			     CRYPTO_ALG_ASYNC |
+			     CRYPTO_ALG_NEED_FALLBACK |
+			     CRYPTO_ALG_OPTIONAL_KEY
+	},
+};
+
+static int spacc_register_hash(struct spacc_alg *salg)
+{
+	int rc = 0;
+
+	salg->calg = &salg->alg.hash.halg.base;
+	salg->alg.hash = spacc_hash_template;
+
+	spacc_init_calg(salg->calg, salg->mode);
+	salg->alg.hash.halg.digestsize = salg->mode->hashlen;
+	salg->alg.hash.halg.statesize = spacc_set_statesize(salg);
+
+	rc = crypto_register_ahash(&salg->alg.hash);
+	if (rc < 0)
+		return rc;
+
+	mutex_lock(&spacc_hash_alg_mutex);
+	list_add(&salg->list, &spacc_hash_alg_list);
+	mutex_unlock(&spacc_hash_alg_mutex);
+
+	return 0;
+}
+
+int spacc_probe_hashes(struct platform_device *spacc_pdev)
+{
+	int rc = 0;
+	unsigned int index;
+	int registered = 0;
+	struct spacc_alg *salg;
+	struct spacc_priv *priv = dev_get_drvdata(&spacc_pdev->dev);
+
+	spacc_hash_pool = dma_pool_create("spacc-digest", &spacc_pdev->dev,
+					  SPACC_MAX_DIGEST_SIZE,
+					  SPACC_DMA_ALIGN, SPACC_DMA_BOUNDARY);
+
+	if (!spacc_hash_pool)
+		return -ENOMEM;
+
+	for (index = 0; index < ARRAY_SIZE(possible_hashes); index++)
+		possible_hashes[index].valid = 0;
+
+	for (index = 0; index < ARRAY_SIZE(possible_hashes); index++) {
+		if (possible_hashes[index].valid == 0 &&
+		    spacc_is_mode_keysize_supported(&priv->spacc,
+				possible_hashes[index].id & 0xFF,
+				possible_hashes[index].hashlen, 1)) {
+			salg = kmalloc(sizeof(*salg), GFP_KERNEL);
+			if (!salg)
+				return -ENOMEM;
+
+			salg->mode = &possible_hashes[index];
+
+			/* Copy all dev's over to the salg */
+			salg->dev = &spacc_pdev->dev;
+
+			rc = spacc_register_hash(salg);
+			if (rc < 0) {
+				kfree(salg);
+				continue;
+			}
+
+			registered++;
+			possible_hashes[index].valid = 1;
+		}
+	}
+
+	return registered;
+}
+
+int spacc_unregister_hash_algs(void)
+{
+	struct spacc_alg *salg, *tmp;
+
+	mutex_lock(&spacc_hash_alg_mutex);
+	list_for_each_entry_safe(salg, tmp, &spacc_hash_alg_list, list) {
+		crypto_unregister_alg(salg->calg);
+		list_del(&salg->list);
+		kfree(salg);
+	}
+	mutex_unlock(&spacc_hash_alg_mutex);
+
+	dma_pool_destroy(spacc_hash_pool);
+
+	return 0;
+}
-- 
2.25.1
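[Editor's note, not part of the patch: the spacc_hash_setkey() path above pre-hashes any HMAC key longer than the underlying block size via do_shash() and stores the digest in tctx->ipad, which is the key-shortening rule from RFC 2104. A quick userspace sketch of that equivalence, using only Python's stdlib (illustrative, independent of the driver):]

```python
import hashlib
import hmac

# RFC 2104: an HMAC key longer than the hash block size is replaced by
# its digest before padding. SHA-256 has a 64-byte block, so a 100-byte
# key and its 32-byte SHA-256 digest must yield identical MACs.
long_key = b"k" * 100  # exceeds the 64-byte SHA-256 block size
msg = b"payload"

mac_long = hmac.new(long_key, msg, hashlib.sha256).digest()
mac_hashed = hmac.new(hashlib.sha256(long_key).digest(), msg,
                      hashlib.sha256).digest()

# This is the equivalence the driver's setkey relies on when it feeds
# the pre-hashed key to both the hardware context and the fallback tfm.
assert mac_long == mac_hashed
```

[A key at or below the block size is used as-is, which matches the driver's else-branch that copies the raw key into tctx->ipad.]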