From: Paul Louvel
To: Herbert Xu, "David S. Miller", Paolo Abeni, David Howells
Cc: Paul Louvel, linux-crypto@vger.kernel.org, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org, Thomas Petazzoni, Herve Codina
Subject: [PATCH 1/2] crypto: talitos - fix SEC1 32k ahash request limitation
Date: Mon, 30 Mar 2026 12:28:18 +0200
Message-ID: <20260330102820.29914-2-paul.louvel@bootlin.com>
X-Mailer: git-send-email 2.53.0
In-Reply-To: <20260330102820.29914-1-paul.louvel@bootlin.com>
References: <20260330102820.29914-1-paul.louvel@bootlin.com>

Since commit c662b043cdca ("crypto: af_alg/hash: Support
MSG_SPLICE_PAGES"), the crypto core may pass large scatterlists spanning
multiple pages to drivers supporting ahash operations. As a result, a
driver can now receive large ahash requests.

The SEC1 engine has a limitation: a single descriptor cannot process
more than 32k of data.
The current implementation tries to handle the entire request within a
single descriptor, which leads to failures reported by the driver:

  "length exceeds h/w max limit"

Address this limitation by splitting large ahash requests into multiple
descriptors, each respecting the 32k hardware limit. This allows
processing arbitrarily large requests.

Cc: stable@vger.kernel.org
Fixes: c662b043cdca ("crypto: af_alg/hash: Support MSG_SPLICE_PAGES")
Signed-off-by: Paul Louvel
---
 drivers/crypto/talitos.c | 216 ++++++++++++++++++++++++++-------------
 1 file changed, 147 insertions(+), 69 deletions(-)

diff --git a/drivers/crypto/talitos.c b/drivers/crypto/talitos.c
index e8c0db687c57..4c325fa0eac1 100644
--- a/drivers/crypto/talitos.c
+++ b/drivers/crypto/talitos.c
@@ -12,6 +12,7 @@
  * All rights reserved.
  */
 
+#include
 #include
 #include
 #include
@@ -870,10 +871,18 @@ struct talitos_ahash_req_ctx {
 	unsigned int swinit;
 	unsigned int first;
 	unsigned int last;
+	unsigned int last_request;
 	unsigned int to_hash_later;
 	unsigned int nbuf;
 	struct scatterlist bufsl[2];
 	struct scatterlist *psrc;
+
+	struct scatterlist request_bufsl[2];
+	struct ahash_request *areq;
+	struct scatterlist *request_sl;
+	unsigned int remaining_ahash_request_bytes;
+	unsigned int current_ahash_request_bytes;
+	struct work_struct sec1_ahash_process_remaining;
 };
 
 struct talitos_export_state {
@@ -1759,7 +1768,20 @@ static void ahash_done(struct device *dev,
 
 	kfree(edesc);
 
-	ahash_request_complete(areq, err);
+	if (err) {
+		ahash_request_complete(areq, err);
+		return;
+	}
+
+	req_ctx->remaining_ahash_request_bytes -=
+		req_ctx->current_ahash_request_bytes;
+
+	if (!req_ctx->remaining_ahash_request_bytes) {
+		ahash_request_complete(areq, 0);
+		return;
+	}
+
+	schedule_work(&req_ctx->sec1_ahash_process_remaining);
 }
 
 /*
@@ -1925,60 +1947,7 @@ static struct talitos_edesc *ahash_edesc_alloc(struct ahash_request *areq,
 			nbytes, 0, 0, 0, areq->base.flags, false);
 }
 
-static int ahash_init(struct
ahash_request *areq)
-{
-	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
-	struct talitos_ctx *ctx = crypto_ahash_ctx(tfm);
-	struct device *dev = ctx->dev;
-	struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
-	unsigned int size;
-	dma_addr_t dma;
-
-	/* Initialize the context */
-	req_ctx->buf_idx = 0;
-	req_ctx->nbuf = 0;
-	req_ctx->first = 1; /* first indicates h/w must init its context */
-	req_ctx->swinit = 0; /* assume h/w init of context */
-	size = (crypto_ahash_digestsize(tfm) <= SHA256_DIGEST_SIZE)
-		? TALITOS_MDEU_CONTEXT_SIZE_MD5_SHA1_SHA256
-		: TALITOS_MDEU_CONTEXT_SIZE_SHA384_SHA512;
-	req_ctx->hw_context_size = size;
-
-	dma = dma_map_single(dev, req_ctx->hw_context, req_ctx->hw_context_size,
-			     DMA_TO_DEVICE);
-	dma_unmap_single(dev, dma, req_ctx->hw_context_size, DMA_TO_DEVICE);
-
-	return 0;
-}
-
-/*
- * on h/w without explicit sha224 support, we initialize h/w context
- * manually with sha224 constants, and tell it to run sha256.
- */
-static int ahash_init_sha224_swinit(struct ahash_request *areq)
-{
-	struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
-
-	req_ctx->hw_context[0] = SHA224_H0;
-	req_ctx->hw_context[1] = SHA224_H1;
-	req_ctx->hw_context[2] = SHA224_H2;
-	req_ctx->hw_context[3] = SHA224_H3;
-	req_ctx->hw_context[4] = SHA224_H4;
-	req_ctx->hw_context[5] = SHA224_H5;
-	req_ctx->hw_context[6] = SHA224_H6;
-	req_ctx->hw_context[7] = SHA224_H7;
-
-	/* init 64-bit count */
-	req_ctx->hw_context[8] = 0;
-	req_ctx->hw_context[9] = 0;
-
-	ahash_init(areq);
-	req_ctx->swinit = 1;/* prevent h/w initting context with sha256 values*/
-
-	return 0;
-}
-
-static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
+static int ahash_process_req_one(struct ahash_request *areq, unsigned int nbytes)
 {
 	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
 	struct talitos_ctx *ctx = crypto_ahash_ctx(tfm);
@@ -1997,12 +1966,12 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
 
 	if (!req_ctx->last && (nbytes + req_ctx->nbuf <= blocksize)) {
 		/* Buffer up to one whole block */
-		nents = sg_nents_for_len(areq->src, nbytes);
+		nents = sg_nents_for_len(req_ctx->request_sl, nbytes);
 		if (nents < 0) {
 			dev_err(dev, "Invalid number of src SG.\n");
 			return nents;
 		}
-		sg_copy_to_buffer(areq->src, nents,
+		sg_copy_to_buffer(req_ctx->request_sl, nents,
 				  ctx_buf + req_ctx->nbuf, nbytes);
 		req_ctx->nbuf += nbytes;
 		return 0;
@@ -2029,7 +1998,7 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
 		sg_init_table(req_ctx->bufsl, nsg);
 		sg_set_buf(req_ctx->bufsl, ctx_buf, req_ctx->nbuf);
 		if (nsg > 1)
-			sg_chain(req_ctx->bufsl, 2, areq->src);
+			sg_chain(req_ctx->bufsl, 2, req_ctx->request_sl);
 		req_ctx->psrc = req_ctx->bufsl;
 	} else if (is_sec1 && req_ctx->nbuf && req_ctx->nbuf < blocksize) {
 		int offset;
@@ -2038,26 +2007,26 @@ static int ahash_process_req(struct ahash_request *areq,
unsigned int nbytes)
 			offset = blocksize - req_ctx->nbuf;
 		else
 			offset = nbytes_to_hash - req_ctx->nbuf;
-		nents = sg_nents_for_len(areq->src, offset);
+		nents = sg_nents_for_len(req_ctx->request_sl, offset);
 		if (nents < 0) {
 			dev_err(dev, "Invalid number of src SG.\n");
 			return nents;
 		}
-		sg_copy_to_buffer(areq->src, nents,
+		sg_copy_to_buffer(req_ctx->request_sl, nents,
 				  ctx_buf + req_ctx->nbuf, offset);
 		req_ctx->nbuf += offset;
-		req_ctx->psrc = scatterwalk_ffwd(req_ctx->bufsl, areq->src,
+		req_ctx->psrc = scatterwalk_ffwd(req_ctx->bufsl, req_ctx->request_sl,
 						 offset);
 	} else
-		req_ctx->psrc = areq->src;
+		req_ctx->psrc = req_ctx->request_sl;
 
 	if (to_hash_later) {
-		nents = sg_nents_for_len(areq->src, nbytes);
+		nents = sg_nents_for_len(req_ctx->request_sl, nbytes);
 		if (nents < 0) {
 			dev_err(dev, "Invalid number of src SG.\n");
 			return nents;
 		}
-		sg_pcopy_to_buffer(areq->src, nents,
+		sg_pcopy_to_buffer(req_ctx->request_sl, nents,
 				   req_ctx->buf[(req_ctx->buf_idx + 1) & 1],
 				   to_hash_later,
 				   nbytes - to_hash_later);
@@ -2065,7 +2034,7 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
 	req_ctx->to_hash_later = to_hash_later;
 
 	/* Allocate extended descriptor */
-	edesc = ahash_edesc_alloc(areq, nbytes_to_hash);
+	edesc = ahash_edesc_alloc(req_ctx->areq, nbytes_to_hash);
 	if (IS_ERR(edesc))
 		return PTR_ERR(edesc);
 
@@ -2087,14 +2056,123 @@ static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
 	if (ctx->keylen && (req_ctx->first || req_ctx->last))
 		edesc->desc.hdr |= DESC_HDR_MODE0_MDEU_HMAC;
 
-	return common_nonsnoop_hash(edesc, areq, nbytes_to_hash, ahash_done);
+	return common_nonsnoop_hash(edesc, req_ctx->areq, nbytes_to_hash, ahash_done);
 }
 
-static int ahash_update(struct ahash_request *areq)
+static void sec1_ahash_process_remaining(struct work_struct *work)
+{
+	struct talitos_ahash_req_ctx *req_ctx =
+		container_of(work, struct talitos_ahash_req_ctx,
+			     sec1_ahash_process_remaining);
+	int err = 0;
+
+	req_ctx->request_sl = scatterwalk_ffwd(req_ctx->request_bufsl,
+			req_ctx->request_sl, TALITOS1_MAX_DATA_LEN);
+
+	if (req_ctx->remaining_ahash_request_bytes > TALITOS1_MAX_DATA_LEN)
+		req_ctx->current_ahash_request_bytes = TALITOS1_MAX_DATA_LEN;
+	else {
+		req_ctx->current_ahash_request_bytes =
+			req_ctx->remaining_ahash_request_bytes;
+
+		if (req_ctx->last_request)
+			req_ctx->last = 1;
+	}
+
+	err = ahash_process_req_one(req_ctx->areq,
+			req_ctx->current_ahash_request_bytes);
+
+	if (err != -EINPROGRESS)
+		ahash_request_complete(req_ctx->areq, err);
+}
+
+static int ahash_process_req(struct ahash_request *areq, unsigned int nbytes)
+{
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+	struct talitos_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct device *dev = ctx->dev;
+	struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
+	struct talitos_private *priv = dev_get_drvdata(dev);
+	bool is_sec1 = has_ftr_sec1(priv);
+
+	req_ctx->areq = areq;
+	req_ctx->request_sl = areq->src;
+	req_ctx->remaining_ahash_request_bytes = nbytes;
+
+	if (is_sec1) {
+		if (nbytes > TALITOS1_MAX_DATA_LEN)
+			nbytes = TALITOS1_MAX_DATA_LEN;
+		else if (req_ctx->last_request)
+			req_ctx->last = 1;
+	}
+
+	req_ctx->current_ahash_request_bytes = nbytes;
+
+	return ahash_process_req_one(req_ctx->areq,
+			req_ctx->current_ahash_request_bytes);
+}
+
+static int ahash_init(struct ahash_request *areq)
 {
+	struct crypto_ahash *tfm = crypto_ahash_reqtfm(areq);
+	struct talitos_ctx *ctx = crypto_ahash_ctx(tfm);
+	struct device *dev = ctx->dev;
 	struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
+	unsigned int size;
+	dma_addr_t dma;
 
+	/* Initialize the context */
+	req_ctx->buf_idx = 0;
+	req_ctx->nbuf = 0;
+	req_ctx->first = 1; /* first indicates h/w must init its context */
+	req_ctx->swinit = 0; /* assume h/w init of context */
+	size = (crypto_ahash_digestsize(tfm)
<= SHA256_DIGEST_SIZE)
+		? TALITOS_MDEU_CONTEXT_SIZE_MD5_SHA1_SHA256
+		: TALITOS_MDEU_CONTEXT_SIZE_SHA384_SHA512;
+	req_ctx->hw_context_size = size;
+	req_ctx->last_request = 0; req_ctx->last = 0;
+	INIT_WORK(&req_ctx->sec1_ahash_process_remaining, sec1_ahash_process_remaining);
+
+	dma = dma_map_single(dev, req_ctx->hw_context, req_ctx->hw_context_size,
+			     DMA_TO_DEVICE);
+	dma_unmap_single(dev, dma, req_ctx->hw_context_size, DMA_TO_DEVICE);
+
+	return 0;
+}
+
+/*
+ * on h/w without explicit sha224 support, we initialize h/w context
+ * manually with sha224 constants, and tell it to run sha256.
+ */
+static int ahash_init_sha224_swinit(struct ahash_request *areq)
+{
+	struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
+
+	req_ctx->hw_context[0] = SHA224_H0;
+	req_ctx->hw_context[1] = SHA224_H1;
+	req_ctx->hw_context[2] = SHA224_H2;
+	req_ctx->hw_context[3] = SHA224_H3;
+	req_ctx->hw_context[4] = SHA224_H4;
+	req_ctx->hw_context[5] = SHA224_H5;
+	req_ctx->hw_context[6] = SHA224_H6;
+	req_ctx->hw_context[7] = SHA224_H7;
+
+	/* init 64-bit count */
+	req_ctx->hw_context[8] = 0;
+	req_ctx->hw_context[9] = 0;
+
+	ahash_init(areq);
+	req_ctx->swinit = 1;/* prevent h/w initting context with sha256 values*/
+
+	return 0;
+}
+
+static int ahash_update(struct ahash_request *areq)
+{
+	struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
+
+	req_ctx->last_request = 0;
 
 	return ahash_process_req(areq, areq->nbytes);
 }
@@ -2103,7 +2181,7 @@ static int ahash_final(struct ahash_request *areq)
 {
 	struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
 
-	req_ctx->last = 1;
+	req_ctx->last_request = 1;
 
 	return ahash_process_req(areq, 0);
 }
@@ -2112,7 +2190,7 @@ static int ahash_finup(struct ahash_request *areq)
 {
 	struct talitos_ahash_req_ctx *req_ctx = ahash_request_ctx(areq);
 
-	req_ctx->last = 1;
+	req_ctx->last_request = 1;
 
 	return ahash_process_req(areq, areq->nbytes);
 }
-- 
Paul Louvel, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com