aesbs_setkey() and aesbs_cbc_ctr_setkey() allocate struct crypto_aes_ctx
on the stack. On arm64, the kernel-mode NEON context is also stored on
the stack, causing the combined frame size to exceed 1024 bytes and
triggering -Wframe-larger-than= warnings.
Allocate struct crypto_aes_ctx on the heap instead and use
kfree_sensitive() to ensure the key material is zeroed on free.
Use a goto-based cleanup path to ensure kfree_sensitive() is always
called.
Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
---
Changes in v2:
- Replace memzero_explicit() + kfree() with kfree_sensitive()
(Eric Biggers)
- Link to v1: https://lore.kernel.org/all/20260305183229.150599-1-yphbchou0911@gmail.com/
arch/arm64/crypto/aes-neonbs-glue.c | 37 ++++++++++++++++++-----------
1 file changed, 23 insertions(+), 14 deletions(-)
diff --git a/arch/arm64/crypto/aes-neonbs-glue.c b/arch/arm64/crypto/aes-neonbs-glue.c
index cb87c8fc66b3..00530b291010 100644
--- a/arch/arm64/crypto/aes-neonbs-glue.c
+++ b/arch/arm64/crypto/aes-neonbs-glue.c
@@ -76,19 +76,24 @@ static int aesbs_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
struct aesbs_ctx *ctx = crypto_skcipher_ctx(tfm);
- struct crypto_aes_ctx rk;
+ struct crypto_aes_ctx *rk;
int err;
- err = aes_expandkey(&rk, in_key, key_len);
+ rk = kmalloc(sizeof(*rk), GFP_KERNEL);
+ if (!rk)
+ return -ENOMEM;
+
+ err = aes_expandkey(rk, in_key, key_len);
if (err)
- return err;
+ goto out;
ctx->rounds = 6 + key_len / 4;
scoped_ksimd()
- aesbs_convert_key(ctx->rk, rk.key_enc, ctx->rounds);
-
- return 0;
+ aesbs_convert_key(ctx->rk, rk->key_enc, ctx->rounds);
+out:
+ kfree_sensitive(rk);
+ return err;
}
static int __ecb_crypt(struct skcipher_request *req,
@@ -133,22 +138,26 @@ static int aesbs_cbc_ctr_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
unsigned int key_len)
{
struct aesbs_cbc_ctr_ctx *ctx = crypto_skcipher_ctx(tfm);
- struct crypto_aes_ctx rk;
+ struct crypto_aes_ctx *rk;
int err;
- err = aes_expandkey(&rk, in_key, key_len);
+ rk = kmalloc(sizeof(*rk), GFP_KERNEL);
+ if (!rk)
+ return -ENOMEM;
+
+ err = aes_expandkey(rk, in_key, key_len);
if (err)
- return err;
+ goto out;
ctx->key.rounds = 6 + key_len / 4;
- memcpy(ctx->enc, rk.key_enc, sizeof(ctx->enc));
+ memcpy(ctx->enc, rk->key_enc, sizeof(ctx->enc));
scoped_ksimd()
- aesbs_convert_key(ctx->key.rk, rk.key_enc, ctx->key.rounds);
- memzero_explicit(&rk, sizeof(rk));
-
- return 0;
+ aesbs_convert_key(ctx->key.rk, rk->key_enc, ctx->key.rounds);
+out:
+ kfree_sensitive(rk);
+ return err;
}
static int cbc_encrypt(struct skcipher_request *req)
--
2.48.1
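[Editor's note, not part of the patch: the pattern the diff applies — heap-allocate the expanded key, funnel every exit through one label, and zero the buffer before freeing — can be sketched in userspace C. Here free_sensitive() stands in for the kernel's kfree_sensitive(), the volatile loop plays the role of memzero_explicit(), and demo_setkey(), aes_ctx, and the error codes are illustrative placeholders, not kernel APIs.]

```c
#include <stdlib.h>
#include <string.h>

/* Userspace stand-in for kfree_sensitive(): zero the buffer with a
 * volatile pointer (so the compiler cannot elide the stores as dead),
 * then free it. Safe to call with NULL, like the kernel helper. */
static void free_sensitive(void *p, size_t len)
{
	volatile unsigned char *vp = p;

	if (!p)
		return;
	while (len--)
		*vp++ = 0;
	free(p);
}

/* Placeholder context; the real struct crypto_aes_ctx is much larger,
 * which is why it no longer fits on the stack. */
struct aes_ctx {
	unsigned char key_enc[32];
};

static int demo_setkey(const unsigned char *in_key, size_t key_len)
{
	struct aes_ctx *rk;
	int err = 0;

	rk = malloc(sizeof(*rk));
	if (!rk)
		return -1;	/* would be -ENOMEM in the kernel */

	if (key_len > sizeof(rk->key_enc)) {
		err = -2;	/* stands in for aes_expandkey() failing */
		goto out;
	}
	memcpy(rk->key_enc, in_key, key_len);
	/* ... key-schedule conversion would happen here ... */
out:
	/* Single exit path: the key material is zeroed and freed on
	 * both the success and error paths, as in the patch. */
	free_sensitive(rk, sizeof(*rk));
	return err;
}
```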
On Fri, Mar 06, 2026 at 02:42:54PM +0800, Cheng-Yang Chou wrote:
> aesbs_setkey() and aesbs_cbc_ctr_setkey() allocate struct crypto_aes_ctx
> on the stack. On arm64, the kernel-mode NEON context is also stored on
> the stack, causing the combined frame size to exceed 1024 bytes and
> triggering -Wframe-larger-than= warnings.
>
> Allocate struct crypto_aes_ctx on the heap instead and use
> kfree_sensitive() to ensure the key material is zeroed on free.
> Use a goto-based cleanup path to ensure kfree_sensitive() is always
> called.
>
> Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
> ---
> Changes in v2:
> - Replace memzero_explicit() + kfree() with kfree_sensitive()
> (Eric Biggers)
> - Link to v1: https://lore.kernel.org/all/20260305183229.150599-1-yphbchou0911@gmail.com/
>
> arch/arm64/crypto/aes-neonbs-glue.c | 37 ++++++++++++++++++-----------
> 1 file changed, 23 insertions(+), 14 deletions(-)
Looks okay for now, though as I mentioned I'd like to eventually
refactor this code to not need so much temporary space.
I'll plan to take this through the libcrypto-fixes tree. Herbert, let
me know if you prefer to take it instead.
I'll plan to add:
Fixes: 4fa617cc6851 ("arm64/fpsimd: Allocate kernel mode FP/SIMD buffers on the stack")
... since that is the change that put the stack usage over the "limit".
- Eric
On Fri, Mar 06, 2026 at 01:35:02PM -0800, Eric Biggers wrote:
> On Fri, Mar 06, 2026 at 02:42:54PM +0800, Cheng-Yang Chou wrote:
> > aesbs_setkey() and aesbs_cbc_ctr_setkey() allocate struct crypto_aes_ctx
> > on the stack. On arm64, the kernel-mode NEON context is also stored on
> > the stack, causing the combined frame size to exceed 1024 bytes and
> > triggering -Wframe-larger-than= warnings.
> >
> > Allocate struct crypto_aes_ctx on the heap instead and use
> > kfree_sensitive() to ensure the key material is zeroed on free.
> > Use a goto-based cleanup path to ensure kfree_sensitive() is always
> > called.
> >
> > Signed-off-by: Cheng-Yang Chou <yphbchou0911@gmail.com>
> > ---
> > Changes in v2:
> > - Replace memzero_explicit() + kfree() with kfree_sensitive()
> > (Eric Biggers)
> > - Link to v1: https://lore.kernel.org/all/20260305183229.150599-1-yphbchou0911@gmail.com/
> >
> > arch/arm64/crypto/aes-neonbs-glue.c | 37 ++++++++++++++++++-----------
> > 1 file changed, 23 insertions(+), 14 deletions(-)
>
> Looks okay for now, though as I mentioned I'd like to eventually
> refactor this code to not need so much temporary space.
>
> I'll plan to take this through the libcrypto-fixes tree. Herbert, let
> me know if you prefer to take it instead.
>
> I'll plan to add:
>
> Fixes: 4fa617cc6851 ("arm64/fpsimd: Allocate kernel mode FP/SIMD buffers on the stack")
>
> ... since that is the change that put the stack usage over the "limit".
Applied to https://git.kernel.org/pub/scm/linux/kernel/git/ebiggers/linux.git/log/?h=libcrypto-fixes
- Eric
On Fri, Mar 06, 2026 at 01:35:02PM -0800, Eric Biggers wrote:
>
> I'll plan to take this through the libcrypto-fixes tree. Herbert, let
> me know if you prefer to take it instead.
No objections from my end.
Thanks,
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt