From: T Pratham
To: T Pratham, Herbert Xu, "David S.
Miller" CC: Manorit Chawdhry , Kamlesh Gurudasani , Shiva Tripathi , Kavitha Malarvizhi , Vishal Mahaveer , Praneeth Bajjuri , , Subject: [PATCH v5 1/4] crypto: ti - Add support for AES-XTS in DTHEv2 driver Date: Wed, 22 Oct 2025 23:15:39 +0530 Message-ID: <20251022180302.729728-2-t-pratham@ti.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20251022180302.729728-1-t-pratham@ti.com> References: <20251022180302.729728-1-t-pratham@ti.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-C2ProcessedOrg: 333ef613-75bf-4e12-a4b1-8e3623f5dcea Content-Type: text/plain; charset="utf-8" Add support for XTS mode of operation for AES algorithm in the AES Engine of the DTHEv2 hardware cryptographic engine. Signed-off-by: T Pratham --- drivers/crypto/ti/Kconfig | 1 + drivers/crypto/ti/dthev2-aes.c | 137 ++++++++++++++++++++++++++++-- drivers/crypto/ti/dthev2-common.h | 10 ++- 3 files changed, 141 insertions(+), 7 deletions(-) diff --git a/drivers/crypto/ti/Kconfig b/drivers/crypto/ti/Kconfig index d4f91c1e0cb55..a3692ceec49bc 100644 --- a/drivers/crypto/ti/Kconfig +++ b/drivers/crypto/ti/Kconfig @@ -6,6 +6,7 @@ config CRYPTO_DEV_TI_DTHEV2 select CRYPTO_SKCIPHER select CRYPTO_ECB select CRYPTO_CBC + select CRYPTO_XTS help This enables support for the TI DTHE V2 hw cryptography engine which can be found on TI K3 SOCs. Selecting this enables use diff --git a/drivers/crypto/ti/dthev2-aes.c b/drivers/crypto/ti/dthev2-aes.c index 3547c41fa4ed3..156729ccc50ec 100644 --- a/drivers/crypto/ti/dthev2-aes.c +++ b/drivers/crypto/ti/dthev2-aes.c @@ -25,6 +25,7 @@ =20 // AES Engine #define DTHE_P_AES_BASE 0x7000 + #define DTHE_P_AES_KEY1_0 0x0038 #define DTHE_P_AES_KEY1_1 0x003C #define DTHE_P_AES_KEY1_2 0x0030 @@ -33,6 +34,16 @@ #define DTHE_P_AES_KEY1_5 0x002C #define DTHE_P_AES_KEY1_6 0x0020 #define DTHE_P_AES_KEY1_7 0x0024 + +#define DTHE_P_AES_KEY2_0 0x0018 +#define DTHE_P_AES_KEY2_1 0x001C +#define DTHE_P_AES_KEY2_2 0x0010 +#define DTHE_P_AES_KEY2_3 0x0014 +#define DTHE_P_AES_KEY2_4 0x0008 +#define DTHE_P_AES_KEY2_5 0x000C +#define DTHE_P_AES_KEY2_6 0x0000 +#define DTHE_P_AES_KEY2_7 0x0004 + #define DTHE_P_AES_IV_IN_0 0x0040 #define DTHE_P_AES_IV_IN_1 0x0044 #define DTHE_P_AES_IV_IN_2 0x0048 @@ -52,6 +63,7 @@ enum aes_ctrl_mode_masks { AES_CTRL_ECB_MASK =3D 0x00, AES_CTRL_CBC_MASK =3D BIT(5), + AES_CTRL_XTS_MASK =3D BIT(12) | BIT(11), }; =20 #define DTHE_AES_CTRL_MODE_CLEAR_MASK ~GENMASK(28, 5) @@ -88,6 +100,31 @@ static int dthe_cipher_init_tfm(struct crypto_skcipher = *tfm) return 0; } =20 +static int dthe_cipher_xts_init_tfm(struct crypto_skcipher *tfm) +{ + struct dthe_tfm_ctx *ctx =3D crypto_skcipher_ctx(tfm); + struct dthe_data *dev_data =3D dthe_get_dev(ctx); + + ctx->dev_data =3D dev_data; + ctx->keylen =3D 0; + + ctx->skcipher_fb =3D crypto_alloc_sync_skcipher("xts(aes)", 0, + CRYPTO_ALG_NEED_FALLBACK); + if (IS_ERR(ctx->skcipher_fb)) { + dev_err(dev_data->dev, "fallback driver xts(aes) couldn't be loaded\n"); + return PTR_ERR(ctx->skcipher_fb); + } + + return 0; +} + +static void dthe_cipher_xts_exit_tfm(struct crypto_skcipher *tfm) +{ + struct dthe_tfm_ctx *ctx =3D crypto_skcipher_ctx(tfm); + + crypto_free_sync_skcipher(ctx->skcipher_fb); +} + static int dthe_aes_setkey(struct crypto_skcipher *tfm, const u8 *key, uns= igned int keylen) { struct dthe_tfm_ctx *ctx =3D crypto_skcipher_ctx(tfm); @@ -119,6 +156,27 @@ static int dthe_aes_cbc_setkey(struct crypto_skcipher = *tfm, const 
u8 *key, unsig return dthe_aes_setkey(tfm, key, keylen); } =20 +static int dthe_aes_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,= unsigned int keylen) +{ + struct dthe_tfm_ctx *ctx =3D crypto_skcipher_ctx(tfm); + + if (keylen !=3D 2 * AES_KEYSIZE_128 && + keylen !=3D 2 * AES_KEYSIZE_192 && + keylen !=3D 2 * AES_KEYSIZE_256) + return -EINVAL; + + ctx->aes_mode =3D DTHE_AES_XTS; + ctx->keylen =3D keylen / 2; + memcpy(ctx->key, key, keylen); + + crypto_sync_skcipher_clear_flags(ctx->skcipher_fb, CRYPTO_TFM_REQ_MASK); + crypto_sync_skcipher_set_flags(ctx->skcipher_fb, + crypto_skcipher_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + + return crypto_sync_skcipher_setkey(ctx->skcipher_fb, key, keylen); +} + static void dthe_aes_set_ctrl_key(struct dthe_tfm_ctx *ctx, struct dthe_aes_req_ctx *rctx, u32 *iv_in) @@ -141,6 +199,24 @@ static void dthe_aes_set_ctrl_key(struct dthe_tfm_ctx = *ctx, writel_relaxed(ctx->key[7], aes_base_reg + DTHE_P_AES_KEY1_7); } =20 + if (ctx->aes_mode =3D=3D DTHE_AES_XTS) { + size_t key2_offset =3D ctx->keylen / sizeof(u32); + + writel_relaxed(ctx->key[key2_offset + 0], aes_base_reg + DTHE_P_AES_KEY2= _0); + writel_relaxed(ctx->key[key2_offset + 1], aes_base_reg + DTHE_P_AES_KEY2= _1); + writel_relaxed(ctx->key[key2_offset + 2], aes_base_reg + DTHE_P_AES_KEY2= _2); + writel_relaxed(ctx->key[key2_offset + 3], aes_base_reg + DTHE_P_AES_KEY2= _3); + + if (ctx->keylen > AES_KEYSIZE_128) { + writel_relaxed(ctx->key[key2_offset + 4], aes_base_reg + DTHE_P_AES_KEY= 2_4); + writel_relaxed(ctx->key[key2_offset + 5], aes_base_reg + DTHE_P_AES_KEY= 2_5); + } + if (ctx->keylen =3D=3D AES_KEYSIZE_256) { + writel_relaxed(ctx->key[key2_offset + 6], aes_base_reg + DTHE_P_AES_KEY= 2_6); + writel_relaxed(ctx->key[key2_offset + 7], aes_base_reg + DTHE_P_AES_KEY= 2_7); + } + } + if (rctx->enc) ctrl_val |=3D DTHE_AES_CTRL_DIR_ENC; =20 @@ -160,6 +236,9 @@ static void dthe_aes_set_ctrl_key(struct dthe_tfm_ctx *= ctx, case DTHE_AES_CBC: ctrl_val |=3D AES_CTRL_CBC_MASK; break; + case DTHE_AES_XTS: + ctrl_val |=3D AES_CTRL_XTS_MASK; + break; } =20 if (iv_in) { @@ -315,24 +394,45 @@ static int dthe_aes_run(struct crypto_engine *engine,= void *areq) local_bh_disable(); crypto_finalize_skcipher_request(dev_data->engine, req, ret); local_bh_enable(); - return ret; + return 0; } =20 static int dthe_aes_crypt(struct skcipher_request *req) { struct dthe_tfm_ctx *ctx =3D crypto_skcipher_ctx(crypto_skcipher_reqtfm(r= eq)); + struct dthe_aes_req_ctx *rctx =3D skcipher_request_ctx(req); struct dthe_data *dev_data =3D dthe_get_dev(ctx); struct crypto_engine *engine; =20 /* - * If data is not a multiple of AES_BLOCK_SIZE, need to return -EINVAL - * If data length input is zero, no need to do any operation. + * If data is not a multiple of AES_BLOCK_SIZE: + * - need to return -EINVAL for ECB, CBC as they are block ciphers + * - need to fallback to software as H/W doesn't support Ciphertext Steal= ing for XTS */ - if (req->cryptlen % AES_BLOCK_SIZE) + if (req->cryptlen % AES_BLOCK_SIZE) { + if (ctx->aes_mode =3D=3D DTHE_AES_XTS) { + SYNC_SKCIPHER_REQUEST_ON_STACK(subreq, ctx->skcipher_fb); + + skcipher_request_set_callback(subreq, skcipher_request_flags(req), + req->base.complete, req->base.data); + skcipher_request_set_crypt(subreq, req->src, req->dst, + req->cryptlen, req->iv); + + return rctx->enc ? crypto_skcipher_encrypt(subreq) : + crypto_skcipher_decrypt(subreq); + } return -EINVAL; + } =20 - if (req->cryptlen =3D=3D 0) + /* + * If data length input is zero, no need to do any operation. 
+	 * Except for XTS mode, where data length should be non-zero.
+	 */
+	if (req->cryptlen == 0) {
+		if (ctx->aes_mode == DTHE_AES_XTS)
+			return -EINVAL;
 		return 0;
+	}
 
 	engine = dev_data->engine;
 	return crypto_transfer_skcipher_request_to_engine(engine, req);
@@ -399,7 +499,32 @@ static struct skcipher_engine_alg cipher_algs[] = {
 			.cra_module = THIS_MODULE,
 		},
 		.op.do_one_request = dthe_aes_run,
-	} /* CBC AES */
+	}, /* CBC AES */
+	{
+		.base.init = dthe_cipher_xts_init_tfm,
+		.base.exit = dthe_cipher_xts_exit_tfm,
+		.base.setkey = dthe_aes_xts_setkey,
+		.base.encrypt = dthe_aes_encrypt,
+		.base.decrypt = dthe_aes_decrypt,
+		.base.min_keysize = AES_MIN_KEY_SIZE * 2,
+		.base.max_keysize = AES_MAX_KEY_SIZE * 2,
+		.base.ivsize = AES_IV_SIZE,
+		.base.base = {
+			.cra_name = "xts(aes)",
+			.cra_driver_name = "xts-aes-dthev2",
+			.cra_priority = 299,
+			.cra_flags = CRYPTO_ALG_TYPE_SKCIPHER |
+				     CRYPTO_ALG_ASYNC |
+				     CRYPTO_ALG_KERN_DRIVER_ONLY |
+				     CRYPTO_ALG_NEED_FALLBACK,
+			.cra_alignmask = AES_BLOCK_SIZE - 1,
+			.cra_blocksize = AES_BLOCK_SIZE,
+			.cra_ctxsize = sizeof(struct dthe_tfm_ctx),
+			.cra_reqsize = sizeof(struct dthe_aes_req_ctx),
+			.cra_module = THIS_MODULE,
+		},
+		.op.do_one_request = dthe_aes_run,
+	}, /* XTS AES */
 };
 
 int dthe_register_aes_algs(void)
diff --git a/drivers/crypto/ti/dthev2-common.h b/drivers/crypto/ti/dthev2-common.h
index 68c94acda8aaa..c7a06a4c353ff 100644
--- a/drivers/crypto/ti/dthev2-common.h
+++ b/drivers/crypto/ti/dthev2-common.h
@@ -27,10 +27,16 @@
 
 #define DTHE_REG_SIZE		4
 #define DTHE_DMA_TIMEOUT_MS	2000
+/*
+ * Size of largest possible key (of all algorithms) to be stored in dthe_tfm_ctx
+ * This is currently the keysize of XTS-AES-256 which is 512 bits (64 bytes)
+ */
+#define DTHE_MAX_KEYSIZE	(AES_MAX_KEY_SIZE * 2)
 
 enum dthe_aes_mode {
 	DTHE_AES_ECB = 0,
 	DTHE_AES_CBC,
+	DTHE_AES_XTS,
 };
 
 /* Driver specific struct definitions */
@@ -73,12 +79,14 @@ struct dthe_list {
  * @keylen: AES key length
  * @key: AES key
  * @aes_mode: AES mode
+ * @skcipher_fb: Fallback crypto skcipher handle for AES-XTS mode
  */
 struct dthe_tfm_ctx {
 	struct dthe_data *dev_data;
 	unsigned int keylen;
-	u32 key[AES_KEYSIZE_256 / sizeof(u32)];
+	u32 key[DTHE_MAX_KEYSIZE / sizeof(u32)];
 	enum dthe_aes_mode aes_mode;
+	struct crypto_sync_skcipher *skcipher_fb;
 };
 
 /**
-- 
2.43.0
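As a usage reference, here is a minimal kernel-side sketch of driving the
xts(aes) transform registered above through the generic skcipher API; the
wrapper function, its name and the flat DMA-able buffer are illustrative
assumptions, not part of the driver:

#include <crypto/aes.h>
#include <crypto/skcipher.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/* Encrypt len bytes in place; buf must be DMA-able (e.g. kmalloc'd). */
static int example_xts_encrypt(const u8 *key, unsigned int keylen,
			       u8 iv[AES_BLOCK_SIZE], void *buf,
			       unsigned int len)
{
	struct crypto_skcipher *tfm;
	struct skcipher_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	/* Resolves to the highest-priority xts(aes) provider, e.g. xts-aes-dthev2 */
	tfm = crypto_alloc_skcipher("xts(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	/* XTS keys are double-length: two AES keys concatenated */
	err = crypto_skcipher_setkey(tfm, key, keylen);
	if (err)
		goto out_free_tfm;

	req = skcipher_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, buf, len);
	skcipher_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				      CRYPTO_TFM_REQ_MAY_SLEEP,
				      crypto_req_done, &wait);
	skcipher_request_set_crypt(req, &sg, &sg, len, iv);

	err = crypto_wait_req(crypto_skcipher_encrypt(req), &wait);

	skcipher_request_free(req);
out_free_tfm:
	crypto_free_skcipher(tfm);
	return err;
}

With a length that is not a multiple of AES_BLOCK_SIZE, the driver above
hands the request to the xts(aes) software fallback, since the hardware
does not implement ciphertext stealing.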
From: T Pratham
To: T Pratham, Herbert Xu, "David S. Miller"
CC: Manorit Chawdhry, Kamlesh Gurudasani, Shiva Tripathi, Kavitha Malarvizhi, Vishal Mahaveer, Praneeth Bajjuri
Subject: [PATCH v5 2/4] crypto: ti - Add support for AES-CTR in DTHEv2 driver
Date: Wed, 22 Oct 2025 23:15:40 +0530
Message-ID: <20251022180302.729728-3-t-pratham@ti.com>
In-Reply-To: <20251022180302.729728-1-t-pratham@ti.com>
References: <20251022180302.729728-1-t-pratham@ti.com>

Add support for CTR mode of operation for AES algorithm in the AES Engine
of the DTHEv2 hardware cryptographic engine.
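The DTHEv2 AES engine consumes whole 16-byte blocks only, so CTR requests
whose length is not a multiple of AES_BLOCK_SIZE are padded by the driver
before reaching the hardware (see dthe_aes_run() below). A stand-alone
sketch of the same length arithmetic, using a flat bounce buffer purely for
illustration (the driver itself appends an extra scatterlist entry instead):

#include <crypto/aes.h>
#include <linux/kernel.h>
#include <linux/slab.h>
#include <linux/string.h>

/*
 * Round a CTR request up to a whole number of AES blocks, zero-filling the
 * tail.  Returns a kzalloc'd buffer of *padded_len bytes, or NULL on OOM.
 */
static void *ctr_pad_example(const void *in, unsigned int cryptlen,
			     unsigned int *padded_len)
{
	unsigned int total = ALIGN(cryptlen, AES_BLOCK_SIZE);
	void *buf = kzalloc(total, GFP_KERNEL);

	if (!buf)
		return NULL;

	memcpy(buf, in, cryptlen);	/* remaining total - cryptlen bytes stay zero */
	*padded_len = total;
	return buf;
}

Only the first cryptlen bytes of the hardware output are meaningful; the
padded tail is discarded.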
Signed-off-by: T Pratham --- drivers/crypto/ti/Kconfig | 1 + drivers/crypto/ti/dthev2-aes.c | 101 +++++++++++++++++++++++++++--- drivers/crypto/ti/dthev2-common.c | 19 ++++++ drivers/crypto/ti/dthev2-common.h | 15 +++++ 4 files changed, 127 insertions(+), 9 deletions(-) diff --git a/drivers/crypto/ti/Kconfig b/drivers/crypto/ti/Kconfig index a3692ceec49bc..6027e12de279d 100644 --- a/drivers/crypto/ti/Kconfig +++ b/drivers/crypto/ti/Kconfig @@ -6,6 +6,7 @@ config CRYPTO_DEV_TI_DTHEV2 select CRYPTO_SKCIPHER select CRYPTO_ECB select CRYPTO_CBC + select CRYPTO_CTR select CRYPTO_XTS help This enables support for the TI DTHE V2 hw cryptography engine diff --git a/drivers/crypto/ti/dthev2-aes.c b/drivers/crypto/ti/dthev2-aes.c index 156729ccc50ec..bf55167f7871d 100644 --- a/drivers/crypto/ti/dthev2-aes.c +++ b/drivers/crypto/ti/dthev2-aes.c @@ -63,6 +63,7 @@ enum aes_ctrl_mode_masks { AES_CTRL_ECB_MASK =3D 0x00, AES_CTRL_CBC_MASK =3D BIT(5), + AES_CTRL_CTR_MASK =3D BIT(6), AES_CTRL_XTS_MASK =3D BIT(12) | BIT(11), }; =20 @@ -74,6 +75,8 @@ enum aes_ctrl_mode_masks { #define DTHE_AES_CTRL_KEYSIZE_24B BIT(4) #define DTHE_AES_CTRL_KEYSIZE_32B (BIT(3) | BIT(4)) =20 +#define DTHE_AES_CTRL_CTR_WIDTH_128B (BIT(7) | BIT(8)) + #define DTHE_AES_CTRL_SAVE_CTX_SET BIT(29) =20 #define DTHE_AES_CTRL_OUTPUT_READY BIT_MASK(0) @@ -156,6 +159,15 @@ static int dthe_aes_cbc_setkey(struct crypto_skcipher = *tfm, const u8 *key, unsig return dthe_aes_setkey(tfm, key, keylen); } =20 +static int dthe_aes_ctr_setkey(struct crypto_skcipher *tfm, const u8 *key,= unsigned int keylen) +{ + struct dthe_tfm_ctx *ctx =3D crypto_skcipher_ctx(tfm); + + ctx->aes_mode =3D DTHE_AES_CTR; + + return dthe_aes_setkey(tfm, key, keylen); +} + static int dthe_aes_xts_setkey(struct crypto_skcipher *tfm, const u8 *key,= unsigned int keylen) { struct dthe_tfm_ctx *ctx =3D crypto_skcipher_ctx(tfm); @@ -236,6 +248,10 @@ static void dthe_aes_set_ctrl_key(struct dthe_tfm_ctx = *ctx, case DTHE_AES_CBC: ctrl_val |=3D AES_CTRL_CBC_MASK; break; + case DTHE_AES_CTR: + ctrl_val |=3D AES_CTRL_CTR_MASK; + ctrl_val |=3D DTHE_AES_CTRL_CTR_WIDTH_128B; + break; case DTHE_AES_XTS: ctrl_val |=3D AES_CTRL_XTS_MASK; break; @@ -271,11 +287,14 @@ static int dthe_aes_run(struct crypto_engine *engine,= void *areq) struct scatterlist *dst =3D req->dst; =20 int src_nents =3D sg_nents_for_len(src, len); - int dst_nents; + int dst_nents =3D sg_nents_for_len(dst, len); =20 int src_mapped_nents; int dst_mapped_nents; =20 + u8 pad_buf[AES_BLOCK_SIZE] =3D {0}; + int pad_len =3D 0; + bool diff_dst; enum dma_data_direction src_dir, dst_dir; =20 @@ -295,6 +314,39 @@ static int dthe_aes_run(struct crypto_engine *engine, = void *areq) aes_irqenable_val |=3D DTHE_AES_IRQENABLE_EN_ALL; writel_relaxed(aes_irqenable_val, aes_base_reg + DTHE_P_AES_IRQENABLE); =20 + if (ctx->aes_mode =3D=3D DTHE_AES_CTR) { + /* + * CTR mode can operate on any input length, but the hardware + * requires input length to be a multiple of the block size. + * We need to handle the padding in the driver. 
+ */ + if (req->cryptlen % AES_BLOCK_SIZE) { + /* Need to create a new SG list with padding */ + pad_len =3D ALIGN(req->cryptlen, AES_BLOCK_SIZE) - req->cryptlen; + struct scatterlist *sg; + + src =3D kmalloc_array((src_nents + 1), sizeof(*src), GFP_KERNEL); + if (!src) { + ret =3D -ENOMEM; + goto aes_src_alloc_err; + } + sg_init_table(src, src_nents + 1); + sg =3D dthe_copy_sg(src, req->src, req->cryptlen); + sg_set_buf(sg, pad_buf, pad_len); + src_nents++; + + dst =3D kmalloc_array(dst_nents + 1, sizeof(*dst), GFP_KERNEL); + if (!dst) { + ret =3D -ENOMEM; + goto aes_dst_alloc_err; + } + sg_init_table(dst, dst_nents + 1); + sg =3D dthe_copy_sg(dst, req->dst, req->cryptlen); + sg_set_buf(sg, pad_buf, pad_len); + dst_nents++; + } + } + if (src =3D=3D dst) { diff_dst =3D false; src_dir =3D DMA_BIDIRECTIONAL; @@ -311,19 +363,16 @@ static int dthe_aes_run(struct crypto_engine *engine,= void *areq) src_mapped_nents =3D dma_map_sg(tx_dev, src, src_nents, src_dir); if (src_mapped_nents =3D=3D 0) { ret =3D -EINVAL; - goto aes_err; + goto aes_map_src_err; } =20 if (!diff_dst) { - dst_nents =3D src_nents; dst_mapped_nents =3D src_mapped_nents; } else { - dst_nents =3D sg_nents_for_len(dst, len); dst_mapped_nents =3D dma_map_sg(rx_dev, dst, dst_nents, dst_dir); if (dst_mapped_nents =3D=3D 0) { - dma_unmap_sg(tx_dev, src, src_nents, src_dir); ret =3D -EINVAL; - goto aes_err; + goto aes_map_dst_err; } } =20 @@ -386,11 +435,19 @@ static int dthe_aes_run(struct crypto_engine *engine,= void *areq) } =20 aes_prep_err: - dma_unmap_sg(tx_dev, src, src_nents, src_dir); if (dst_dir !=3D DMA_BIDIRECTIONAL) dma_unmap_sg(rx_dev, dst, dst_nents, dst_dir); +aes_map_dst_err: + dma_unmap_sg(tx_dev, src, src_nents, src_dir); + +aes_map_src_err: + if (ctx->aes_mode =3D=3D DTHE_AES_CTR && req->cryptlen % AES_BLOCK_SIZE) { + kfree(dst); +aes_dst_alloc_err: + kfree(src); + } =20 -aes_err: +aes_src_alloc_err: local_bh_disable(); crypto_finalize_skcipher_request(dev_data->engine, req, ret); local_bh_enable(); @@ -408,6 +465,7 @@ static int dthe_aes_crypt(struct skcipher_request *req) * If data is not a multiple of AES_BLOCK_SIZE: * - need to return -EINVAL for ECB, CBC as they are block ciphers * - need to fallback to software as H/W doesn't support Ciphertext Steal= ing for XTS + * - do nothing for CTR */ if (req->cryptlen % AES_BLOCK_SIZE) { if (ctx->aes_mode =3D=3D DTHE_AES_XTS) { @@ -421,7 +479,8 @@ static int dthe_aes_crypt(struct skcipher_request *req) return rctx->enc ? 
crypto_skcipher_encrypt(subreq) : crypto_skcipher_decrypt(subreq); } - return -EINVAL; + if (ctx->aes_mode !=3D DTHE_AES_CTR) + return -EINVAL; } =20 /* @@ -500,6 +559,30 @@ static struct skcipher_engine_alg cipher_algs[] =3D { }, .op.do_one_request =3D dthe_aes_run, }, /* CBC AES */ + { + .base.init =3D dthe_cipher_init_tfm, + .base.setkey =3D dthe_aes_ctr_setkey, + .base.encrypt =3D dthe_aes_encrypt, + .base.decrypt =3D dthe_aes_decrypt, + .base.min_keysize =3D AES_MIN_KEY_SIZE, + .base.max_keysize =3D AES_MAX_KEY_SIZE, + .base.ivsize =3D AES_IV_SIZE, + .base.chunksize =3D AES_BLOCK_SIZE, + .base.base =3D { + .cra_name =3D "ctr(aes)", + .cra_driver_name =3D "ctr-aes-dthev2", + .cra_priority =3D 299, + .cra_flags =3D CRYPTO_ALG_TYPE_SKCIPHER | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ALLOCATES_MEMORY, + .cra_blocksize =3D 1, + .cra_ctxsize =3D sizeof(struct dthe_tfm_ctx), + .cra_reqsize =3D sizeof(struct dthe_aes_req_ctx), + .cra_module =3D THIS_MODULE, + }, + .op.do_one_request =3D dthe_aes_run, + }, /* CTR AES */ { .base.init =3D dthe_cipher_xts_init_tfm, .base.exit =3D dthe_cipher_xts_exit_tfm, diff --git a/drivers/crypto/ti/dthev2-common.c b/drivers/crypto/ti/dthev2-c= ommon.c index c39d37933b9ee..a2ad79bec105a 100644 --- a/drivers/crypto/ti/dthev2-common.c +++ b/drivers/crypto/ti/dthev2-common.c @@ -48,6 +48,25 @@ struct dthe_data *dthe_get_dev(struct dthe_tfm_ctx *ctx) return dev_data; } =20 +struct scatterlist *dthe_copy_sg(struct scatterlist *dst, + struct scatterlist *src, + int buflen) +{ + struct scatterlist *from_sg, *to_sg; + int sglen; + + for (to_sg =3D dst, from_sg =3D src; buflen && from_sg; buflen -=3D sglen= ) { + sglen =3D from_sg->length; + if (sglen > buflen) + sglen =3D buflen; + sg_set_buf(to_sg, sg_virt(from_sg), sglen); + from_sg =3D sg_next(from_sg); + to_sg =3D sg_next(to_sg); + } + + return to_sg; +} + static int dthe_dma_init(struct dthe_data *dev_data) { int ret; diff --git a/drivers/crypto/ti/dthev2-common.h b/drivers/crypto/ti/dthev2-c= ommon.h index c7a06a4c353ff..f12b94d64e134 100644 --- a/drivers/crypto/ti/dthev2-common.h +++ b/drivers/crypto/ti/dthev2-common.h @@ -36,6 +36,7 @@ enum dthe_aes_mode { DTHE_AES_ECB =3D 0, DTHE_AES_CBC, + DTHE_AES_CTR, DTHE_AES_XTS, }; =20 @@ -103,6 +104,20 @@ struct dthe_aes_req_ctx { =20 struct dthe_data *dthe_get_dev(struct dthe_tfm_ctx *ctx); =20 +/** + * dthe_copy_sg - Copy sg entries from src to dst + * @dst: Destination sg to be filled + * @src: Source sg to be copied from + * @buflen: Number of bytes to be copied + * + * Description: + * Copy buflen bytes of data from src to dst. 
+ *
+ **/
+struct scatterlist *dthe_copy_sg(struct scatterlist *dst,
+				 struct scatterlist *src,
+				 int buflen);
+
 int dthe_register_aes_algs(void);
 void dthe_unregister_aes_algs(void);
 
-- 
2.43.0

From: T Pratham
To: T Pratham, Herbert Xu, "David S. 
Miller" CC: Manorit Chawdhry , Kamlesh Gurudasani , Shiva Tripathi , Kavitha Malarvizhi , Vishal Mahaveer , Praneeth Bajjuri , , Subject: [PATCH v5 3/4] crypto: ti - Add support for AES-GCM in DTHEv2 driver Date: Wed, 22 Oct 2025 23:15:41 +0530 Message-ID: <20251022180302.729728-4-t-pratham@ti.com> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20251022180302.729728-1-t-pratham@ti.com> References: <20251022180302.729728-1-t-pratham@ti.com> Precedence: bulk X-Mailing-List: linux-kernel@vger.kernel.org List-Id: List-Subscribe: List-Unsubscribe: MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-C2ProcessedOrg: 333ef613-75bf-4e12-a4b1-8e3623f5dcea Content-Type: text/plain; charset="utf-8" AES-GCM is an AEAD algorithm supporting both encryption and authentication of data. This patch introduces support for AES-GCM as the first AEAD algorithm supported by the DTHEv2 driver. Signed-off-by: T Pratham --- drivers/crypto/ti/Kconfig | 2 + drivers/crypto/ti/dthev2-aes.c | 583 +++++++++++++++++++++++++++++- drivers/crypto/ti/dthev2-common.h | 9 +- 3 files changed, 592 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/ti/Kconfig b/drivers/crypto/ti/Kconfig index 6027e12de279d..221e483737439 100644 --- a/drivers/crypto/ti/Kconfig +++ b/drivers/crypto/ti/Kconfig @@ -8,6 +8,8 @@ config CRYPTO_DEV_TI_DTHEV2 select CRYPTO_CBC select CRYPTO_CTR select CRYPTO_XTS + select CRYPTO_GCM + select SG_SPLIT help This enables support for the TI DTHE V2 hw cryptography engine which can be found on TI K3 SOCs. Selecting this enables use diff --git a/drivers/crypto/ti/dthev2-aes.c b/drivers/crypto/ti/dthev2-aes.c index bf55167f7871d..c63e3eac3346f 100644 --- a/drivers/crypto/ti/dthev2-aes.c +++ b/drivers/crypto/ti/dthev2-aes.c @@ -10,6 +10,7 @@ #include #include #include +#include #include #include =20 @@ -19,6 +20,7 @@ #include #include #include +#include #include =20 /* Registers */ @@ -53,6 +55,7 @@ #define DTHE_P_AES_C_LENGTH_1 0x0058 #define DTHE_P_AES_AUTH_LENGTH 0x005C #define DTHE_P_AES_DATA_IN_OUT 0x0060 +#define DTHE_P_AES_TAG_OUT 0x0070 =20 #define DTHE_P_AES_SYSCONFIG 0x0084 #define DTHE_P_AES_IRQSTATUS 0x008C @@ -65,6 +68,7 @@ enum aes_ctrl_mode_masks { AES_CTRL_CBC_MASK =3D BIT(5), AES_CTRL_CTR_MASK =3D BIT(6), AES_CTRL_XTS_MASK =3D BIT(12) | BIT(11), + AES_CTRL_GCM_MASK =3D BIT(17) | BIT(16) | BIT(6), }; =20 #define DTHE_AES_CTRL_MODE_CLEAR_MASK ~GENMASK(28, 5) @@ -91,6 +95,8 @@ enum aes_ctrl_mode_masks { #define AES_IV_SIZE AES_BLOCK_SIZE #define AES_BLOCK_WORDS (AES_BLOCK_SIZE / sizeof(u32)) #define AES_IV_WORDS AES_BLOCK_WORDS +#define DTHE_AES_GCM_AAD_MAXLEN (BIT_ULL(32) - 1) +#define POLL_TIMEOUT_INTERVAL HZ =20 static int dthe_cipher_init_tfm(struct crypto_skcipher *tfm) { @@ -255,6 +261,9 @@ static void dthe_aes_set_ctrl_key(struct dthe_tfm_ctx *= ctx, case DTHE_AES_XTS: ctrl_val |=3D AES_CTRL_XTS_MASK; break; + case DTHE_AES_GCM: + ctrl_val |=3D AES_CTRL_GCM_MASK; + break; } =20 if (iv_in) { @@ -513,6 +522,544 @@ static int dthe_aes_decrypt(struct skcipher_request *= req) return dthe_aes_crypt(req); } =20 +static int dthe_aead_init_tfm(struct crypto_aead *tfm) +{ + struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(tfm); + struct dthe_data *dev_data =3D dthe_get_dev(ctx); + + ctx->dev_data =3D dev_data; + + const char *alg_name =3D crypto_tfm_alg_name(crypto_aead_tfm(tfm)); + + ctx->aead_fb =3D crypto_alloc_sync_aead(alg_name, 0, + CRYPTO_ALG_NEED_FALLBACK); + if (IS_ERR(ctx->aead_fb)) { + dev_err(dev_data->dev, "fallback driver %s couldn't be loaded\n", + alg_name); + return 
PTR_ERR(ctx->aead_fb); + } + + return 0; +} + +static void dthe_aead_exit_tfm(struct crypto_aead *tfm) +{ + struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(tfm); + + crypto_free_sync_aead(ctx->aead_fb); +} + +/** + * dthe_aead_prep_src - Prepare source scatterlist for AEAD from input req= ->src + * @sg: Input req->src scatterlist + * @assoclen: Input req->assoclen + * @cryptlen: Input req->cryptlen (minus the size of TAG in decryption) + * @assoc_pad_buf: Buffer to hold AAD padding if needed + * @crypt_pad_buf: Buffer to hold ciphertext/plaintext padding if needed + * + * Description: + * For modes with authentication, DTHEv2 hardware requires the input AAD= and + * plaintext/ciphertext to be individually aligned to AES_BLOCK_SIZE. If= either is not + * aligned, it needs to be padded with zeros by the software before pass= ing the data to + * the hardware. However, linux crypto's aead_request provides the input= with AAD and + * plaintext/ciphertext contiguously appended together in a single scatt= erlist. + * + * This helper function takes the input scatterlist and splits it into s= eparate + * scatterlists for AAD and plaintext/ciphertext, ensuring each is align= ed to + * AES_BLOCK_SIZE by adding necessary padding, and then merges the align= ed scatterlists + * back into a single scatterlist for processing. + * + * Return: + * Pointer to the merged scatterlist, or ERR_PTR(error) on failure. + * The calling function needs to free the returned scatterlist when done. + **/ +static struct scatterlist *dthe_aead_prep_src(struct scatterlist *sg, + unsigned int assoclen, + unsigned int cryptlen, + u8 *assoc_pad_buf, + u8 *crypt_pad_buf) +{ + struct scatterlist *in_sg[2]; + struct scatterlist *to_sg; + struct scatterlist *src; + size_t split_sizes[2] =3D {assoclen, cryptlen}; + int out_mapped_nents[2]; + int crypt_nents =3D 0, assoc_nents =3D 0, src_nents =3D 0; + int err =3D 0; + + /* sg_split does not work properly if one of the split_sizes is 0 */ + if (cryptlen =3D=3D 0 || assoclen =3D=3D 0) { + /* + * Assigning both to sg does not matter as assoclen =3D 0 or cryptlen = =3D 0 + * being passed to dthe_copy_sg will take care to copy the sg correctly + */ + in_sg[0] =3D sg; + in_sg[1] =3D sg; + + src_nents =3D sg_nents_for_len(sg, assoclen + cryptlen); + } else { + err =3D sg_split(sg, 0, 0, 2, split_sizes, in_sg, out_mapped_nents, GFP_= KERNEL); + if (err) + goto dthe_aead_prep_src_split_err; + assoc_nents =3D sg_nents_for_len(in_sg[0], assoclen); + crypt_nents =3D sg_nents_for_len(in_sg[1], cryptlen); + + src_nents =3D assoc_nents + crypt_nents; + } + + if (assoclen % AES_BLOCK_SIZE) + src_nents++; + if (cryptlen % AES_BLOCK_SIZE) + src_nents++; + + src =3D kmalloc_array(src_nents, sizeof(struct scatterlist), GFP_KERNEL); + if (!src) { + err =3D -ENOMEM; + goto dthe_aead_prep_src_mem_err; + } + + sg_init_table(src, src_nents); + to_sg =3D src; + + to_sg =3D dthe_copy_sg(to_sg, in_sg[0], assoclen); + if (assoclen % AES_BLOCK_SIZE) { + unsigned int pad_len =3D AES_BLOCK_SIZE - (assoclen % AES_BLOCK_SIZE); + + sg_set_buf(to_sg, assoc_pad_buf, pad_len); + to_sg =3D sg_next(to_sg); + } + + to_sg =3D dthe_copy_sg(to_sg, in_sg[1], cryptlen); + if (cryptlen % AES_BLOCK_SIZE) { + unsigned int pad_len =3D AES_BLOCK_SIZE - (cryptlen % AES_BLOCK_SIZE); + + sg_set_buf(to_sg, crypt_pad_buf, pad_len); + to_sg =3D sg_next(to_sg); + } + +dthe_aead_prep_src_mem_err: + if (cryptlen !=3D 0 && assoclen !=3D 0) { + kfree(in_sg[0]); + kfree(in_sg[1]); + } + +dthe_aead_prep_src_split_err: + if (err) + return 
ERR_PTR(err); + return src; +} + +/** + * dthe_aead_prep_dst - Prepare destination scatterlist for AEAD from inpu= t req->dst + * @sg: Input req->dst scatterlist + * @assoclen: Input req->assoclen + * @cryptlen: Input req->cryptlen (minus the size of TAG in decryption) + * @pad_buf: Buffer to hold ciphertext/plaintext padding if needed + * + * Description: + * For modes with authentication, DTHEv2 hardware returns encrypted ciph= ertext/decrypted + * plaintext through DMA and TAG through MMRs. However, the dst scatterl= ist in linux + * crypto's aead_request is allocated same as input req->src scatterlist= . That is, it + * contains space for AAD in the beginning and ciphertext/plaintext at t= he end, with no + * alignment padding. This causes issues with DMA engine and DTHEv2 hard= ware. + * + * This helper function takes the output scatterlist and maps the part o= f the buffer + * which holds only the ciphertext/plaintext to a new scatterlist. It al= so adds a padding + * to align it with AES_BLOCK_SIZE. + * + * Return: + * Pointer to the trimmed scatterlist, or ERR_PTR(error) on failure. + * The calling function needs to free the returned scatterlist when done. + **/ +static struct scatterlist *dthe_aead_prep_dst(struct scatterlist *sg, + unsigned int assoclen, + unsigned int cryptlen, + u8 *pad_buf) +{ + struct scatterlist *out_sg[1]; + struct scatterlist *dst; + struct scatterlist *to_sg; + size_t split_sizes[1] =3D {cryptlen}; + int out_mapped_nents[1]; + int dst_nents =3D 0; + int err =3D 0; + + err =3D sg_split(sg, 0, assoclen, 1, split_sizes, out_sg, out_mapped_nent= s, GFP_KERNEL); + if (err) + goto dthe_aead_prep_dst_split_err; + + dst_nents =3D sg_nents_for_len(out_sg[0], cryptlen); + if (cryptlen % AES_BLOCK_SIZE) + dst_nents++; + + dst =3D kmalloc_array(dst_nents, sizeof(struct scatterlist), GFP_KERNEL); + if (!dst) { + err =3D -ENOMEM; + goto dthe_aead_prep_dst_mem_err; + } + sg_init_table(dst, dst_nents); + + to_sg =3D dthe_copy_sg(dst, out_sg[0], cryptlen); + if (cryptlen % AES_BLOCK_SIZE) { + unsigned int pad_len =3D AES_BLOCK_SIZE - (cryptlen % AES_BLOCK_SIZE); + + sg_set_buf(to_sg, pad_buf, pad_len); + to_sg =3D sg_next(to_sg); + } + +dthe_aead_prep_dst_mem_err: + kfree(out_sg[0]); + +dthe_aead_prep_dst_split_err: + if (err) + return ERR_PTR(err); + return dst; +} + +static int dthe_aead_read_tag(struct dthe_tfm_ctx *ctx, u32 *tag) +{ + struct dthe_data *dev_data =3D dthe_get_dev(ctx); + void __iomem *aes_base_reg =3D dev_data->regs + DTHE_P_AES_BASE; + u32 val; + int ret; + + ret =3D readl_relaxed_poll_timeout(aes_base_reg + DTHE_P_AES_CTRL, val, + (val & DTHE_AES_CTRL_SAVED_CTX_READY), + 0, POLL_TIMEOUT_INTERVAL); + if (ret) + return ret; + + for (int i =3D 0; i < AES_BLOCK_WORDS; ++i) + tag[i] =3D readl_relaxed(aes_base_reg + + DTHE_P_AES_TAG_OUT + + DTHE_REG_SIZE * i); + return 0; +} + +static int dthe_aead_enc_get_tag(struct aead_request *req) +{ + struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(crypto_aead_reqtfm(req)); + u32 tag[AES_BLOCK_WORDS]; + int nents; + int ret; + + ret =3D dthe_aead_read_tag(ctx, tag); + if (ret) + return ret; + + nents =3D sg_nents_for_len(req->dst, req->cryptlen + req->assoclen + ctx-= >authsize); + + sg_pcopy_from_buffer(req->dst, nents, tag, ctx->authsize, + req->assoclen + req->cryptlen); + + return 0; +} + +static int dthe_aead_dec_verify_tag(struct aead_request *req) +{ + struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(crypto_aead_reqtfm(req)); + u32 tag_out[AES_BLOCK_WORDS]; + u32 tag_in[AES_BLOCK_WORDS]; + int nents; + int ret; + + 
ret =3D dthe_aead_read_tag(ctx, tag_out); + if (ret) + return ret; + + nents =3D sg_nents_for_len(req->src, req->assoclen + req->cryptlen); + + sg_pcopy_to_buffer(req->src, nents, tag_in, ctx->authsize, + req->assoclen + req->cryptlen - ctx->authsize); + + if (memcmp(tag_in, tag_out, ctx->authsize)) + return -EBADMSG; + else + return 0; +} + +static int dthe_aead_setkey(struct crypto_aead *tfm, const u8 *key, unsign= ed int keylen) +{ + struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(tfm); + + if (keylen !=3D AES_KEYSIZE_128 && keylen !=3D AES_KEYSIZE_192 && keylen = !=3D AES_KEYSIZE_256) + return -EINVAL; + + ctx->aes_mode =3D DTHE_AES_GCM; + ctx->keylen =3D keylen; + memcpy(ctx->key, key, keylen); + + crypto_sync_aead_clear_flags(ctx->aead_fb, CRYPTO_TFM_REQ_MASK); + crypto_sync_aead_set_flags(ctx->aead_fb, + crypto_aead_get_flags(tfm) & + CRYPTO_TFM_REQ_MASK); + + return crypto_sync_aead_setkey(ctx->aead_fb, key, keylen); +} + +static int dthe_aead_setauthsize(struct crypto_aead *tfm, unsigned int aut= hsize) +{ + struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(tfm); + + /* Invalid auth size will be handled by crypto_aead_setauthsize() */ + ctx->authsize =3D authsize; + + return crypto_sync_aead_setauthsize(ctx->aead_fb, authsize); +} + +static void dthe_aead_dma_in_callback(void *data) +{ + struct aead_request *req =3D (struct aead_request *)data; + struct dthe_aes_req_ctx *rctx =3D aead_request_ctx(req); + + complete(&rctx->aes_compl); +} + +static int dthe_aead_run(struct crypto_engine *engine, void *areq) +{ + struct aead_request *req =3D container_of(areq, struct aead_request, base= ); + struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(crypto_aead_reqtfm(req)); + struct dthe_aes_req_ctx *rctx =3D aead_request_ctx(req); + struct dthe_data *dev_data =3D dthe_get_dev(ctx); + + unsigned int cryptlen =3D req->cryptlen; + unsigned int assoclen =3D req->assoclen; + unsigned int authsize =3D ctx->authsize; + unsigned int unpadded_cryptlen; + struct scatterlist *src =3D req->src; + struct scatterlist *dst =3D req->dst; + u32 iv_in[AES_IV_WORDS]; + + int src_nents; + int dst_nents; + int src_mapped_nents, dst_mapped_nents; + + u8 src_assoc_padbuf[AES_BLOCK_SIZE] =3D {0}; + u8 src_crypt_padbuf[AES_BLOCK_SIZE] =3D {0}; + u8 dst_crypt_padbuf[AES_BLOCK_SIZE] =3D {0}; + + enum dma_data_direction src_dir, dst_dir; + + struct device *tx_dev, *rx_dev; + struct dma_async_tx_descriptor *desc_in, *desc_out; + + int ret; + + void __iomem *aes_base_reg =3D dev_data->regs + DTHE_P_AES_BASE; + + u32 aes_irqenable_val =3D readl_relaxed(aes_base_reg + DTHE_P_AES_IRQENAB= LE); + u32 aes_sysconfig_val =3D readl_relaxed(aes_base_reg + DTHE_P_AES_SYSCONF= IG); + + aes_sysconfig_val |=3D DTHE_AES_SYSCONFIG_DMA_DATA_IN_OUT_EN; + writel_relaxed(aes_sysconfig_val, aes_base_reg + DTHE_P_AES_SYSCONFIG); + + aes_irqenable_val |=3D DTHE_AES_IRQENABLE_EN_ALL; + writel_relaxed(aes_irqenable_val, aes_base_reg + DTHE_P_AES_IRQENABLE); + + /* In decryption, the last authsize bytes are the TAG */ + if (!rctx->enc) + cryptlen -=3D authsize; + unpadded_cryptlen =3D cryptlen; + + /* Prep src and dst scatterlists */ + src =3D dthe_aead_prep_src(req->src, req->assoclen, cryptlen, + src_assoc_padbuf, src_crypt_padbuf); + if (IS_ERR(src)) { + ret =3D PTR_ERR(src); + goto aead_prep_src_err; + } + + if (req->assoclen % AES_BLOCK_SIZE) + assoclen +=3D AES_BLOCK_SIZE - (req->assoclen % AES_BLOCK_SIZE); + if (cryptlen % AES_BLOCK_SIZE) + cryptlen +=3D AES_BLOCK_SIZE - (cryptlen % AES_BLOCK_SIZE); + + src_nents =3D sg_nents_for_len(src, assoclen + 
cryptlen); + + if (cryptlen !=3D 0) { + dst =3D dthe_aead_prep_dst(req->dst, req->assoclen, unpadded_cryptlen, + dst_crypt_padbuf); + if (IS_ERR(dst)) { + ret =3D PTR_ERR(dst); + goto aead_prep_dst_err; + } + + dst_nents =3D sg_nents_for_len(dst, cryptlen); + } + /* Prep finished */ + + src_dir =3D DMA_TO_DEVICE; + dst_dir =3D DMA_FROM_DEVICE; + + tx_dev =3D dmaengine_get_dma_device(dev_data->dma_aes_tx); + rx_dev =3D dmaengine_get_dma_device(dev_data->dma_aes_rx); + + src_mapped_nents =3D dma_map_sg(tx_dev, src, src_nents, src_dir); + if (src_mapped_nents =3D=3D 0) { + ret =3D -EINVAL; + goto aead_dma_map_src_err; + } + + desc_out =3D dmaengine_prep_slave_sg(dev_data->dma_aes_tx, src, src_mappe= d_nents, + DMA_MEM_TO_DEV, DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + if (!desc_out) { + ret =3D -EINVAL; + goto aead_dma_prep_src_err; + } + + desc_out->callback =3D dthe_aead_dma_in_callback; + desc_out->callback_param =3D req; + + if (cryptlen !=3D 0) { + dst_mapped_nents =3D dma_map_sg(rx_dev, dst, dst_nents, dst_dir); + if (dst_mapped_nents =3D=3D 0) { + ret =3D -EINVAL; + goto aead_dma_prep_src_err; + } + + desc_in =3D dmaengine_prep_slave_sg(dev_data->dma_aes_rx, dst, + dst_mapped_nents, DMA_DEV_TO_MEM, + DMA_PREP_INTERRUPT | DMA_CTRL_ACK); + if (!desc_in) { + ret =3D -EINVAL; + goto aead_dma_prep_dst_err; + } + } + + init_completion(&rctx->aes_compl); + + /* + * HACK: There is an unknown hw issue where if the previous operation had= alen =3D 0 and + * plen !=3D 0, the current operation's tag calculation is incorrect in t= he case where + * plen =3D 0 and alen !=3D 0 currently. This is a workaround for now whi= ch somehow works; + * by resetting the context by writing a 1 to the C_LENGTH_0 and AUTH_LEN= GTH registers. + */ + if (cryptlen =3D=3D 0) { + writel_relaxed(1, aes_base_reg + DTHE_P_AES_C_LENGTH_0); + writel_relaxed(1, aes_base_reg + DTHE_P_AES_AUTH_LENGTH); + } + + if (req->iv) { + memcpy(iv_in, req->iv, GCM_AES_IV_SIZE); + } else { + iv_in[0] =3D 0; + iv_in[1] =3D 0; + iv_in[2] =3D 0; + } + iv_in[3] =3D 0x01000000; + + /* Clear key2 to reset previous GHASH intermediate data */ + for (int i =3D 0; i < AES_KEYSIZE_256 / sizeof(u32); ++i) + writel_relaxed(0, aes_base_reg + DTHE_P_AES_KEY2_6 + DTHE_REG_SIZE * i); + + dthe_aes_set_ctrl_key(ctx, rctx, iv_in); + + writel_relaxed(lower_32_bits(unpadded_cryptlen), aes_base_reg + DTHE_P_AE= S_C_LENGTH_0); + writel_relaxed(upper_32_bits(unpadded_cryptlen), aes_base_reg + DTHE_P_AE= S_C_LENGTH_1); + writel_relaxed(req->assoclen, aes_base_reg + DTHE_P_AES_AUTH_LENGTH); + + if (cryptlen !=3D 0) + dmaengine_submit(desc_in); + dmaengine_submit(desc_out); + + if (cryptlen !=3D 0) + dma_async_issue_pending(dev_data->dma_aes_rx); + dma_async_issue_pending(dev_data->dma_aes_tx); + + /* Need to do timeout to ensure finalise gets called if DMA callback fail= s for any reason */ + ret =3D wait_for_completion_timeout(&rctx->aes_compl, msecs_to_jiffies(DT= HE_DMA_TIMEOUT_MS)); + if (!ret) { + ret =3D -ETIMEDOUT; + if (cryptlen !=3D 0) + dmaengine_terminate_sync(dev_data->dma_aes_rx); + dmaengine_terminate_sync(dev_data->dma_aes_tx); + + for (int i =3D 0; i < AES_BLOCK_WORDS; ++i) + readl_relaxed(aes_base_reg + DTHE_P_AES_DATA_IN_OUT + DTHE_REG_SIZE * i= ); + } else { + ret =3D 0; + } + + if (cryptlen !=3D 0) + dma_sync_sg_for_cpu(rx_dev, dst, dst_nents, dst_dir); + if (rctx->enc) + ret =3D dthe_aead_enc_get_tag(req); + else + ret =3D dthe_aead_dec_verify_tag(req); + +aead_dma_prep_dst_err: + if (cryptlen !=3D 0) + dma_unmap_sg(rx_dev, dst, dst_nents, dst_dir); 
+aead_dma_prep_src_err: + dma_unmap_sg(tx_dev, src, src_nents, src_dir); + +aead_dma_map_src_err: + if (cryptlen !=3D 0) + kfree(dst); + +aead_prep_dst_err: + kfree(src); + +aead_prep_src_err: + local_bh_disable(); + crypto_finalize_aead_request(engine, req, ret); + local_bh_enable(); + return 0; +} + +static int dthe_aead_crypt(struct aead_request *req) +{ + struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(crypto_aead_reqtfm(req)); + struct dthe_aes_req_ctx *rctx =3D aead_request_ctx(req); + struct dthe_data *dev_data =3D dthe_get_dev(ctx); + struct crypto_engine *engine; + unsigned int cryptlen =3D req->cryptlen; + + /* In decryption, last authsize bytes are the TAG */ + if (!rctx->enc) + cryptlen -=3D ctx->authsize; + + /* + * Need to fallback to software in the following cases due to HW restrict= ions: + * - Both AAD and plaintext/ciphertext are zero length + * - AAD length is more than 2^32 - 1 bytes + * PS: req->cryptlen is currently unsigned int type, which causes the abo= ve condition + * tautologically false. If req->cryptlen were to be changed to a 64-bit = type, + * the check for this would need to be added below. + */ + if (req->assoclen =3D=3D 0 && cryptlen =3D=3D 0) { + SYNC_AEAD_REQUEST_ON_STACK(subreq, ctx->aead_fb); + + aead_request_set_callback(subreq, aead_request_flags(req), + req->base.complete, req->base.data); + aead_request_set_crypt(subreq, req->src, req->dst, + req->cryptlen, req->iv); + aead_request_set_ad(subreq, req->assoclen); + + return rctx->enc ? crypto_aead_encrypt(subreq) : + crypto_aead_decrypt(subreq); + } + + engine =3D dev_data->engine; + return crypto_transfer_aead_request_to_engine(engine, req); +} + +static int dthe_aead_encrypt(struct aead_request *req) +{ + struct dthe_aes_req_ctx *rctx =3D aead_request_ctx(req); + + rctx->enc =3D 1; + return dthe_aead_crypt(req); +} + +static int dthe_aead_decrypt(struct aead_request *req) +{ + struct dthe_aes_req_ctx *rctx =3D aead_request_ctx(req); + + rctx->enc =3D 0; + return dthe_aead_crypt(req); +} + static struct skcipher_engine_alg cipher_algs[] =3D { { .base.init =3D dthe_cipher_init_tfm, @@ -610,12 +1157,46 @@ static struct skcipher_engine_alg cipher_algs[] =3D { }, /* XTS AES */ }; =20 +static struct aead_engine_alg aead_algs[] =3D { + { + .base.init =3D dthe_aead_init_tfm, + .base.exit =3D dthe_aead_exit_tfm, + .base.setkey =3D dthe_aead_setkey, + .base.setauthsize =3D dthe_aead_setauthsize, + .base.maxauthsize =3D AES_BLOCK_SIZE, + .base.encrypt =3D dthe_aead_encrypt, + .base.decrypt =3D dthe_aead_decrypt, + .base.chunksize =3D AES_BLOCK_SIZE, + .base.ivsize =3D GCM_AES_IV_SIZE, + .base.base =3D { + .cra_name =3D "gcm(aes)", + .cra_driver_name =3D "gcm-aes-dthev2", + .cra_priority =3D 299, + .cra_flags =3D CRYPTO_ALG_TYPE_AEAD | + CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize =3D 1, + .cra_ctxsize =3D sizeof(struct dthe_tfm_ctx), + .cra_reqsize =3D sizeof(struct dthe_aes_req_ctx), + .cra_module =3D THIS_MODULE, + }, + .op.do_one_request =3D dthe_aead_run, + }, /* GCM AES */ +}; + int dthe_register_aes_algs(void) { - return crypto_engine_register_skciphers(cipher_algs, ARRAY_SIZE(cipher_al= gs)); + int ret =3D 0; + + ret |=3D crypto_engine_register_skciphers(cipher_algs, ARRAY_SIZE(cipher_= algs)); + ret |=3D crypto_engine_register_aeads(aead_algs, ARRAY_SIZE(aead_algs)); + + return ret; } =20 void dthe_unregister_aes_algs(void) { crypto_engine_unregister_skciphers(cipher_algs, ARRAY_SIZE(cipher_algs)); + crypto_engine_unregister_aeads(aead_algs, 
ARRAY_SIZE(aead_algs));
 }
diff --git a/drivers/crypto/ti/dthev2-common.h b/drivers/crypto/ti/dthev2-common.h
index f12b94d64e134..7c54291359bf5 100644
--- a/drivers/crypto/ti/dthev2-common.h
+++ b/drivers/crypto/ti/dthev2-common.h
@@ -38,6 +38,7 @@ enum dthe_aes_mode {
 	DTHE_AES_CBC,
 	DTHE_AES_CTR,
 	DTHE_AES_XTS,
+	DTHE_AES_GCM,
 };
 
 /* Driver specific struct definitions */
@@ -78,16 +79,22 @@ struct dthe_list {
  * struct dthe_tfm_ctx - Transform ctx struct containing ctx for all sub-components of DTHE V2
  * @dev_data: Device data struct pointer
  * @keylen: AES key length
+ * @authsize: Authentication size for modes with authentication
  * @key: AES key
  * @aes_mode: AES mode
+ * @aead_fb: Fallback crypto aead handle
  * @skcipher_fb: Fallback crypto skcipher handle for AES-XTS mode
  */
 struct dthe_tfm_ctx {
 	struct dthe_data *dev_data;
 	unsigned int keylen;
+	unsigned int authsize;
 	u32 key[DTHE_MAX_KEYSIZE / sizeof(u32)];
 	enum dthe_aes_mode aes_mode;
-	struct crypto_sync_skcipher *skcipher_fb;
+	union {
+		struct crypto_sync_aead *aead_fb;
+		struct crypto_sync_skcipher *skcipher_fb;
+	};
 };
 
 /**
-- 
2.43.0
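For reference, a minimal kernel-side sketch of exercising the gcm(aes) AEAD
registered by this patch through the generic AEAD API; the wrapper function,
single-buffer layout and 16-byte tag size are illustrative assumptions:

#include <crypto/aead.h>
#include <linux/crypto.h>
#include <linux/err.h>
#include <linux/scatterlist.h>

/*
 * buf layout (single DMA-able buffer):
 *   [ assoclen bytes AAD | ptlen bytes plaintext | 16 bytes reserved for tag ]
 * On success the plaintext region holds the ciphertext and the tag follows it.
 */
static int example_gcm_encrypt(const u8 *key, unsigned int keylen,
			       u8 *iv /* 12 bytes */, void *buf,
			       unsigned int assoclen, unsigned int ptlen)
{
	struct crypto_aead *tfm;
	struct aead_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int err;

	/* Resolves to the highest-priority gcm(aes) provider, e.g. gcm-aes-dthev2 */
	tfm = crypto_alloc_aead("gcm(aes)", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_aead_setkey(tfm, key, keylen);
	if (!err)
		err = crypto_aead_setauthsize(tfm, 16);
	if (err)
		goto out_free_tfm;

	req = aead_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		err = -ENOMEM;
		goto out_free_tfm;
	}

	sg_init_one(&sg, buf, assoclen + ptlen + 16);
	aead_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG |
				  CRYPTO_TFM_REQ_MAY_SLEEP,
				  crypto_req_done, &wait);
	aead_request_set_ad(req, assoclen);
	aead_request_set_crypt(req, &sg, &sg, ptlen, iv);

	err = crypto_wait_req(crypto_aead_encrypt(req), &wait);

	aead_request_free(req);
out_free_tfm:
	crypto_free_aead(tfm);
	return err;
}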
From: T Pratham
To: T Pratham, Herbert Xu, "David S. Miller"
CC: Manorit Chawdhry, Kamlesh Gurudasani, Shiva Tripathi, Kavitha Malarvizhi, Vishal Mahaveer, Praneeth Bajjuri
Subject: [PATCH v5 4/4] crypto: ti - Add support for AES-CCM in DTHEv2 driver
Date: Wed, 22 Oct 2025 23:15:42 +0530
Message-ID: <20251022180302.729728-5-t-pratham@ti.com>
In-Reply-To: <20251022180302.729728-1-t-pratham@ti.com>
References: <20251022180302.729728-1-t-pratham@ti.com>

AES-CCM is an AEAD algorithm supporting both encryption and authentication
of data. This patch introduces support for AES-CCM AEAD algorithm in the
DTHEv2 driver.

Signed-off-by: T Pratham
---
 drivers/crypto/ti/Kconfig         |   1 +
 drivers/crypto/ti/dthev2-aes.c    | 129 ++++++++++++++++++++++++++----
 drivers/crypto/ti/dthev2-common.h |   1 +
 3 files changed, 115 insertions(+), 16 deletions(-)

diff --git a/drivers/crypto/ti/Kconfig b/drivers/crypto/ti/Kconfig
index 221e483737439..1a3a571ac8cef 100644
--- a/drivers/crypto/ti/Kconfig
+++ b/drivers/crypto/ti/Kconfig
@@ -9,6 +9,7 @@ config CRYPTO_DEV_TI_DTHEV2
 	select CRYPTO_CTR
 	select CRYPTO_XTS
 	select CRYPTO_GCM
+	select CRYPTO_CCM
 	select SG_SPLIT
 	help
 	  This enables support for the TI DTHE V2 hw cryptography engine
diff --git a/drivers/crypto/ti/dthev2-aes.c b/drivers/crypto/ti/dthev2-aes.c
index c63e3eac3346f..8c627539cf5c8 100644
--- a/drivers/crypto/ti/dthev2-aes.c
+++ b/drivers/crypto/ti/dthev2-aes.c
@@ -16,6 +16,7 @@
 
 #include "dthev2-common.h"
 
+#include
 #include
 #include
 #include
@@ -69,6 +70,7 @@ enum aes_ctrl_mode_masks {
 	AES_CTRL_CTR_MASK = BIT(6),
 	AES_CTRL_XTS_MASK = BIT(12) | BIT(11),
 	AES_CTRL_GCM_MASK = BIT(17) | BIT(16) | BIT(6),
+	AES_CTRL_CCM_MASK = BIT(18) | BIT(6),
 };
 
 #define DTHE_AES_CTRL_MODE_CLEAR_MASK ~GENMASK(28, 5)
@@ -81,6 +83,11 @@ enum aes_ctrl_mode_masks {
 
 #define DTHE_AES_CTRL_CTR_WIDTH_128B (BIT(7) | BIT(8))
 
+#define DTHE_AES_CCM_L_FROM_IV_MASK	GENMASK(2, 0)
+#define DTHE_AES_CCM_M_BITS		GENMASK(2, 0)
+#define DTHE_AES_CTRL_CCM_L_FIELD_MASK	GENMASK(21, 19)
+#define DTHE_AES_CTRL_CCM_M_FIELD_MASK	GENMASK(24, 22)
+
 #define DTHE_AES_CTRL_SAVE_CTX_SET BIT(29)
 
 #define DTHE_AES_CTRL_OUTPUT_READY BIT_MASK(0)
@@ -96,6 +103,8 @@ enum aes_ctrl_mode_masks {
 #define AES_BLOCK_WORDS (AES_BLOCK_SIZE / sizeof(u32))
 #define AES_IV_WORDS AES_BLOCK_WORDS
 #define DTHE_AES_GCM_AAD_MAXLEN (BIT_ULL(32) - 1)
+#define DTHE_AES_CCM_AAD_MAXLEN (BIT(16) - BIT(8))
+#define DTHE_AES_CCM_CRYPT_MAXLEN (BIT_ULL(61) - 1)
 #define POLL_TIMEOUT_INTERVAL HZ
 
 static int dthe_cipher_init_tfm(struct
crypto_skcipher *tfm) @@ -264,6 +273,13 @@ static void dthe_aes_set_ctrl_key(struct dthe_tfm_ctx = *ctx, case DTHE_AES_GCM: ctrl_val |=3D AES_CTRL_GCM_MASK; break; + case DTHE_AES_CCM: + ctrl_val |=3D AES_CTRL_CCM_MASK; + ctrl_val |=3D FIELD_PREP(DTHE_AES_CTRL_CCM_L_FIELD_MASK, + (iv_in[0] & DTHE_AES_CCM_L_FROM_IV_MASK)); + ctrl_val |=3D FIELD_PREP(DTHE_AES_CTRL_CCM_M_FIELD_MASK, + ((ctx->authsize - 2) >> 1) & DTHE_AES_CCM_M_BITS); + break; } =20 if (iv_in) { @@ -785,10 +801,6 @@ static int dthe_aead_setkey(struct crypto_aead *tfm, c= onst u8 *key, unsigned int if (keylen !=3D AES_KEYSIZE_128 && keylen !=3D AES_KEYSIZE_192 && keylen = !=3D AES_KEYSIZE_256) return -EINVAL; =20 - ctx->aes_mode =3D DTHE_AES_GCM; - ctx->keylen =3D keylen; - memcpy(ctx->key, key, keylen); - crypto_sync_aead_clear_flags(ctx->aead_fb, CRYPTO_TFM_REQ_MASK); crypto_sync_aead_set_flags(ctx->aead_fb, crypto_aead_get_flags(tfm) & @@ -797,6 +809,28 @@ static int dthe_aead_setkey(struct crypto_aead *tfm, c= onst u8 *key, unsigned int return crypto_sync_aead_setkey(ctx->aead_fb, key, keylen); } =20 +static int dthe_gcm_aes_setkey(struct crypto_aead *tfm, const u8 *key, uns= igned int keylen) +{ + struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(tfm); + + ctx->aes_mode =3D DTHE_AES_GCM; + ctx->keylen =3D keylen; + memcpy(ctx->key, key, keylen); + + return dthe_aead_setkey(tfm, key, keylen); +} + +static int dthe_ccm_aes_setkey(struct crypto_aead *tfm, const u8 *key, uns= igned int keylen) +{ + struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(tfm); + + ctx->aes_mode =3D DTHE_AES_CCM; + ctx->keylen =3D keylen; + memcpy(ctx->key, key, keylen); + + return dthe_aead_setkey(tfm, key, keylen); +} + static int dthe_aead_setauthsize(struct crypto_aead *tfm, unsigned int aut= hsize) { struct dthe_tfm_ctx *ctx =3D crypto_aead_ctx(tfm); @@ -939,14 +973,18 @@ static int dthe_aead_run(struct crypto_engine *engine= , void *areq) writel_relaxed(1, aes_base_reg + DTHE_P_AES_AUTH_LENGTH); } =20 - if (req->iv) { - memcpy(iv_in, req->iv, GCM_AES_IV_SIZE); + if (ctx->aes_mode =3D=3D DTHE_AES_GCM) { + if (req->iv) { + memcpy(iv_in, req->iv, GCM_AES_IV_SIZE); + } else { + iv_in[0] =3D 0; + iv_in[1] =3D 0; + iv_in[2] =3D 0; + } + iv_in[3] =3D 0x01000000; } else { - iv_in[0] =3D 0; - iv_in[1] =3D 0; - iv_in[2] =3D 0; + memcpy(iv_in, req->iv, AES_IV_SIZE); } - iv_in[3] =3D 0x01000000; =20 /* Clear key2 to reset previous GHASH intermediate data */ for (int i =3D 0; i < AES_KEYSIZE_256 / sizeof(u32); ++i) @@ -1014,20 +1052,54 @@ static int dthe_aead_crypt(struct aead_request *req) struct dthe_data *dev_data =3D dthe_get_dev(ctx); struct crypto_engine *engine; unsigned int cryptlen =3D req->cryptlen; + bool is_zero_ctr =3D true; =20 /* In decryption, last authsize bytes are the TAG */ if (!rctx->enc) cryptlen -=3D ctx->authsize; =20 + if (ctx->aes_mode =3D=3D DTHE_AES_CCM) { + /* + * For CCM Mode, the 128-bit IV contains the following: + * | 0 .. 2 | 3 .. 7 | 8 .. (127-8*L) | (128-8*L) .. 127 | + * | L-1 | Zero | Nonce | Counter | + * L needs to be between 2-8 (inclusive), i.e. 1 <=3D (L-1) <=3D 7 + * and the next 5 bits need to be zeroes. Else return -EINVAL + */ + u8 *iv =3D req->iv; + u8 L =3D iv[0]; + + if (L < 1 || L > 7) + return -EINVAL; + /* + * DTHEv2 HW can only work with zero initial counter in CCM mode. 
+ * Check if the initial counter value is zero or not + */ + for (int i =3D 0; i < L + 1; ++i) { + if (iv[AES_IV_SIZE - 1 - i] !=3D 0) { + is_zero_ctr =3D false; + break; + } + } + } + /* * Need to fallback to software in the following cases due to HW restrict= ions: * - Both AAD and plaintext/ciphertext are zero length - * - AAD length is more than 2^32 - 1 bytes - * PS: req->cryptlen is currently unsigned int type, which causes the abo= ve condition - * tautologically false. If req->cryptlen were to be changed to a 64-bit = type, - * the check for this would need to be added below. + * - For AES-GCM, AAD length is more than 2^32 - 1 bytes + * - For AES-CCM, AAD length is more than 2^16 - 2^8 bytes + * - For AES-CCM, plaintext/ciphertext length is more than 2^61 - 1 bytes + * - For AES-CCM, AAD length is non-zero but plaintext/ciphertext length = is zero + * - For AES-CCM, the initial counter (last L+1 bytes of IV) is not all z= eroes + * + * PS: req->cryptlen is currently unsigned int type, which causes the sec= ond and fourth + * cases above tautologically false. If req->cryptlen is to be changed to= a 64-bit + * type, the check for these would also need to be added below. */ - if (req->assoclen =3D=3D 0 && cryptlen =3D=3D 0) { + if ((req->assoclen =3D=3D 0 && cryptlen =3D=3D 0) || + (ctx->aes_mode =3D=3D DTHE_AES_CCM && req->assoclen > DTHE_AES_CCM_AA= D_MAXLEN) || + (ctx->aes_mode =3D=3D DTHE_AES_CCM && cryptlen =3D=3D 0) || + (ctx->aes_mode =3D=3D DTHE_AES_CCM && !is_zero_ctr)) { SYNC_AEAD_REQUEST_ON_STACK(subreq, ctx->aead_fb); =20 aead_request_set_callback(subreq, aead_request_flags(req), @@ -1161,7 +1233,7 @@ static struct aead_engine_alg aead_algs[] =3D { { .base.init =3D dthe_aead_init_tfm, .base.exit =3D dthe_aead_exit_tfm, - .base.setkey =3D dthe_aead_setkey, + .base.setkey =3D dthe_gcm_aes_setkey, .base.setauthsize =3D dthe_aead_setauthsize, .base.maxauthsize =3D AES_BLOCK_SIZE, .base.encrypt =3D dthe_aead_encrypt, @@ -1183,6 +1255,31 @@ static struct aead_engine_alg aead_algs[] =3D { }, .op.do_one_request =3D dthe_aead_run, }, /* GCM AES */ + { + .base.init =3D dthe_aead_init_tfm, + .base.exit =3D dthe_aead_exit_tfm, + .base.setkey =3D dthe_ccm_aes_setkey, + .base.setauthsize =3D dthe_aead_setauthsize, + .base.maxauthsize =3D AES_BLOCK_SIZE, + .base.encrypt =3D dthe_aead_encrypt, + .base.decrypt =3D dthe_aead_decrypt, + .base.chunksize =3D AES_BLOCK_SIZE, + .base.ivsize =3D AES_IV_SIZE, + .base.base =3D { + .cra_name =3D "ccm(aes)", + .cra_driver_name =3D "ccm-aes-dthev2", + .cra_priority =3D 299, + .cra_flags =3D CRYPTO_ALG_TYPE_AEAD | + CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize =3D 1, + .cra_ctxsize =3D sizeof(struct dthe_tfm_ctx), + .cra_reqsize =3D sizeof(struct dthe_aes_req_ctx), + .cra_module =3D THIS_MODULE, + }, + .op.do_one_request =3D dthe_aead_run, + }, /* CCM AES */ }; =20 int dthe_register_aes_algs(void) diff --git a/drivers/crypto/ti/dthev2-common.h b/drivers/crypto/ti/dthev2-c= ommon.h index 7c54291359bf5..3b8d30b3408a0 100644 --- a/drivers/crypto/ti/dthev2-common.h +++ b/drivers/crypto/ti/dthev2-common.h @@ -39,6 +39,7 @@ enum dthe_aes_mode { DTHE_AES_CTR, DTHE_AES_XTS, DTHE_AES_GCM, + DTHE_AES_CCM, }; =20 /* Driver specific struct definitions */ --=20 2.43.0
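To make the CCM IV constraints above concrete, here is a sketch of how a
caller lays out the 16-byte ccm(aes) IV so that the request stays on the
hardware path (a non-zero initial counter makes the driver fall back to
software); the helper name is hypothetical:

#include <crypto/aes.h>
#include <linux/string.h>
#include <linux/types.h>

/*
 * ccm(aes) IV layout checked in dthe_aead_crypt():
 *   iv[0]          = L - 1 (length-field size L, 2 <= L <= 8)
 *   iv[1..15-L]    = nonce
 *   iv[16-L..15]   = block counter, must start at zero for the HW path
 */
static void ccm_build_iv(u8 iv[AES_BLOCK_SIZE], const u8 *nonce,
			 unsigned int L /* length-field size, 2..8 */)
{
	memset(iv, 0, AES_BLOCK_SIZE);		/* counter bytes start at zero */
	iv[0] = L - 1;				/* L' field */
	memcpy(&iv[1], nonce, 15 - L);		/* nonce fills the middle */
}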