V1->V2:
* Drop the unnecessary casts (Eric).
* Put the 'Link:' tag right after 'Suggested-by' (Eric).
---
Each field within struct aesni_xts_ctx is currently defined as a byte
array sized to match struct crypto_aes_ctx, even though it actually
holds an instance of that struct.
Redefining the fields with their real data type requires some
adjustments to the address alignment code. The series first refactors
the common alignment code, then updates the structure definition.
Finally, the XTS alignment is performed once, early on, rather than at
every access point.
This change was suggested during Eric's review of another series
intended to enable an alternative AES implementation [1][2]. I view it
as a mainline improvement that stands on its own, independent of that
series.
I have split these changes into incremental patches to make them
considerably easier to review and maintain.
The series is based on the cryptodev master branch:
git://git.kernel.org/pub/scm/linux/kernel/git/herbert/cryptodev-2.6.git master
Thanks,
Chang
[1] https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
[2] https://lore.kernel.org/all/20230526065414.GB875@sol.localdomain/
Chang S. Bae (3):
crypto: x86/aesni - Refactor the common address alignment code
crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx
crypto: x86/aesni - Perform address alignment early for XTS mode
arch/x86/crypto/aesni-intel_glue.c | 52 ++++++++++++++----------------
1 file changed, 25 insertions(+), 27 deletions(-)
base-commit: 1c43c0f1f84aa59dfc98ce66f0a67b2922aa7f9d
--
2.34.1
On Thu, Sep 28, 2023 at 12:25:05AM -0700, Chang S. Bae wrote:
> Chang S. Bae (3):
>   crypto: x86/aesni - Refactor the common address alignment code
>   crypto: x86/aesni - Correct the data type in struct aesni_xts_ctx
>   crypto: x86/aesni - Perform address alignment early for XTS mode
All applied.  Thanks.
--
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
The address alignment code is duplicated for each mode. Instead of
repeating the same logic, refactor it into a single helper and
simplify the per-mode alignment helpers.
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Link: https://lore.kernel.org/all/20230526065414.GB875@sol.localdomain/
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
V1->V2: drop the casts (Eric).
---
arch/x86/crypto/aesni-intel_glue.c | 26 ++++++++++----------------
1 file changed, 10 insertions(+), 16 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 39d6a62ac627..308deeb0c974 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -80,6 +80,13 @@ struct gcm_context_data {
u8 hash_keys[GCM_BLOCK_LEN * 16];
};
+static inline void *aes_align_addr(void *addr)
+{
+ if (crypto_tfm_ctx_alignment() >= AESNI_ALIGN)
+ return addr;
+ return PTR_ALIGN(addr, AESNI_ALIGN);
+}
+
asmlinkage int aesni_set_key(struct crypto_aes_ctx *ctx, const u8 *in_key,
unsigned int key_len);
asmlinkage void aesni_enc(const void *ctx, u8 *out, const u8 *in);
@@ -201,32 +208,19 @@ static __ro_after_init DEFINE_STATIC_KEY_FALSE(gcm_use_avx2);
static inline struct
aesni_rfc4106_gcm_ctx *aesni_rfc4106_gcm_ctx_get(struct crypto_aead *tfm)
{
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+ return aes_align_addr(crypto_aead_ctx(tfm));
}
static inline struct
generic_gcmaes_ctx *generic_gcmaes_ctx_get(struct crypto_aead *tfm)
{
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return PTR_ALIGN(crypto_aead_ctx(tfm), align);
+ return aes_align_addr(crypto_aead_ctx(tfm));
}
#endif
static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
{
- unsigned long addr = (unsigned long)raw_ctx;
- unsigned long align = AESNI_ALIGN;
-
- if (align <= crypto_tfm_ctx_alignment())
- align = 1;
- return (struct crypto_aes_ctx *)ALIGN(addr, align);
+ return aes_align_addr(raw_ctx);
}
static int aes_set_key_common(struct crypto_aes_ctx *ctx,
--
2.34.1
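For reference, here is a minimal standalone sketch (not part of the patch)
of the round-up that PTR_ALIGN() performs inside the new aes_align_addr()
helper. The 16-byte AESNI_ALIGN value matches this driver's requirement;
the helper name and everything else below are illustrative only. The real
helper additionally returns the address unchanged when
crypto_tfm_ctx_alignment() already guarantees AESNI_ALIGN, as the hunk
above shows.

#include <stdint.h>
#include <stdio.h>

#define AESNI_ALIGN 16  /* the driver's 16-byte alignment requirement */

/* Round addr up to the next AESNI_ALIGN boundary; a no-op if already aligned. */
static void *align_up(void *addr)
{
        uintptr_t a = (uintptr_t)addr;

        return (void *)((a + AESNI_ALIGN - 1) & ~(uintptr_t)(AESNI_ALIGN - 1));
}

int main(void)
{
        char buf[64];

        /* buf + 1 is almost certainly misaligned; show where it rounds up to. */
        printf("%p -> %p\n", (void *)(buf + 1), align_up(buf + 1));
        return 0;
}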
Currently, every field in struct aesni_xts_ctx is defined as a byte
array of the same size as struct crypto_aes_ctx. This obscures the
actual data type, and the choice lacks justification.
To rectify this, update the field types in struct aesni_xts_ctx to
match the structure they actually hold.
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Link: https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
arch/x86/crypto/aesni-intel_glue.c | 19 +++++++++----------
1 file changed, 9 insertions(+), 10 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 308deeb0c974..80e28a01aa3a 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -61,8 +61,8 @@ struct generic_gcmaes_ctx {
};
struct aesni_xts_ctx {
- u8 raw_tweak_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
- u8 raw_crypt_ctx[sizeof(struct crypto_aes_ctx)] AESNI_ALIGN_ATTR;
+ struct crypto_aes_ctx tweak_ctx AESNI_ALIGN_ATTR;
+ struct crypto_aes_ctx crypt_ctx AESNI_ALIGN_ATTR;
};
#define GCM_BLOCK_LEN 16
@@ -885,13 +885,12 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
keylen /= 2;
/* first half of xts-key is for crypt */
- err = aes_set_key_common(aes_ctx(ctx->raw_crypt_ctx), key, keylen);
+ err = aes_set_key_common(aes_ctx(&ctx->crypt_ctx), key, keylen);
if (err)
return err;
/* second half of xts-key is for tweak */
- return aes_set_key_common(aes_ctx(ctx->raw_tweak_ctx), key + keylen,
- keylen);
+ return aes_set_key_common(aes_ctx(&ctx->tweak_ctx), key + keylen, keylen);
}
static int xts_crypt(struct skcipher_request *req, bool encrypt)
@@ -933,7 +932,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
/* calculate first value of T */
- aesni_enc(aes_ctx(ctx->raw_tweak_ctx), walk.iv, walk.iv);
+ aesni_enc(aes_ctx(&ctx->tweak_ctx), walk.iv, walk.iv);
while (walk.nbytes > 0) {
int nbytes = walk.nbytes;
@@ -942,11 +941,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
nbytes &= ~(AES_BLOCK_SIZE - 1);
if (encrypt)
- aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
kernel_fpu_end();
@@ -974,11 +973,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
if (encrypt)
- aesni_xts_encrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(ctx->raw_crypt_ctx),
+ aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
kernel_fpu_end();
--
2.34.1
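As a side note, the standalone sketch below (not part of the patch) uses a
rough stand-in for struct crypto_aes_ctx to illustrate that swapping the
byte arrays for typed members leaves the context layout untouched; only the
type information changes. The struct is an approximation, not the kernel
definition, and the compile-time checks verify the claim.

#include <assert.h>
#include <stddef.h>

/* Rough stand-in for struct crypto_aes_ctx: expanded round keys plus a length. */
struct aes_ctx_standin {
        unsigned int key_enc[60];
        unsigned int key_dec[60];
        unsigned int key_length;
};

/* Old definition: opaque byte arrays sized to the context struct. */
struct xts_ctx_old {
        unsigned char raw_tweak_ctx[sizeof(struct aes_ctx_standin)] __attribute__((aligned(16)));
        unsigned char raw_crypt_ctx[sizeof(struct aes_ctx_standin)] __attribute__((aligned(16)));
};

/* New definition: the members carry their real type. */
struct xts_ctx_new {
        struct aes_ctx_standin tweak_ctx __attribute__((aligned(16)));
        struct aes_ctx_standin crypt_ctx __attribute__((aligned(16)));
};

/* Same overall size and member offsets, so only the field types change. */
static_assert(sizeof(struct xts_ctx_old) == sizeof(struct xts_ctx_new), "size differs");
static_assert(offsetof(struct xts_ctx_old, raw_crypt_ctx) ==
              offsetof(struct xts_ctx_new, crypt_ctx), "offset differs");

int main(void) { return 0; }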
Currently, the alignment of each field in struct aesni_xts_ctx occurs
right before every access. However, it's possible to perform this
alignment ahead of time.
Introduce a helper function that converts struct crypto_skcipher *tfm
into an aligned struct aesni_xts_ctx pointer. Use this helper at the
beginning of each XTS function and then eliminate the now-redundant
alignment code.
Suggested-by: Eric Biggers <ebiggers@kernel.org>
Link: https://lore.kernel.org/all/ZFWQ4sZEVu%2FLHq+Q@gmail.com/
Signed-off-by: Chang S. Bae <chang.seok.bae@intel.com>
Cc: linux-crypto@vger.kernel.org
Cc: x86@kernel.org
Cc: linux-kernel@vger.kernel.org
---
V1->V2: drop the cast (Eric).
---
arch/x86/crypto/aesni-intel_glue.c | 23 ++++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/arch/x86/crypto/aesni-intel_glue.c b/arch/x86/crypto/aesni-intel_glue.c
index 80e28a01aa3a..b1d90c25975a 100644
--- a/arch/x86/crypto/aesni-intel_glue.c
+++ b/arch/x86/crypto/aesni-intel_glue.c
@@ -223,6 +223,11 @@ static inline struct crypto_aes_ctx *aes_ctx(void *raw_ctx)
return aes_align_addr(raw_ctx);
}
+static inline struct aesni_xts_ctx *aes_xts_ctx(struct crypto_skcipher *tfm)
+{
+ return aes_align_addr(crypto_skcipher_ctx(tfm));
+}
+
static int aes_set_key_common(struct crypto_aes_ctx *ctx,
const u8 *in_key, unsigned int key_len)
{
@@ -875,7 +880,7 @@ static int helper_rfc4106_decrypt(struct aead_request *req)
static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
unsigned int keylen)
{
- struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
int err;
err = xts_verify_key(tfm, key, keylen);
@@ -885,18 +890,18 @@ static int xts_aesni_setkey(struct crypto_skcipher *tfm, const u8 *key,
keylen /= 2;
/* first half of xts-key is for crypt */
- err = aes_set_key_common(aes_ctx(&ctx->crypt_ctx), key, keylen);
+ err = aes_set_key_common(&ctx->crypt_ctx, key, keylen);
if (err)
return err;
/* second half of xts-key is for tweak */
- return aes_set_key_common(aes_ctx(&ctx->tweak_ctx), key + keylen, keylen);
+ return aes_set_key_common(&ctx->tweak_ctx, key + keylen, keylen);
}
static int xts_crypt(struct skcipher_request *req, bool encrypt)
{
struct crypto_skcipher *tfm = crypto_skcipher_reqtfm(req);
- struct aesni_xts_ctx *ctx = crypto_skcipher_ctx(tfm);
+ struct aesni_xts_ctx *ctx = aes_xts_ctx(tfm);
int tail = req->cryptlen % AES_BLOCK_SIZE;
struct skcipher_request subreq;
struct skcipher_walk walk;
@@ -932,7 +937,7 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
/* calculate first value of T */
- aesni_enc(aes_ctx(&ctx->tweak_ctx), walk.iv, walk.iv);
+ aesni_enc(&ctx->tweak_ctx, walk.iv, walk.iv);
while (walk.nbytes > 0) {
int nbytes = walk.nbytes;
@@ -941,11 +946,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
nbytes &= ~(AES_BLOCK_SIZE - 1);
if (encrypt)
- aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_encrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_decrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
nbytes, walk.iv);
kernel_fpu_end();
@@ -973,11 +978,11 @@ static int xts_crypt(struct skcipher_request *req, bool encrypt)
kernel_fpu_begin();
if (encrypt)
- aesni_xts_encrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_encrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
else
- aesni_xts_decrypt(aes_ctx(&ctx->crypt_ctx),
+ aesni_xts_decrypt(&ctx->crypt_ctx,
walk.dst.virt.addr, walk.src.virt.addr,
walk.nbytes, walk.iv);
kernel_fpu_end();
--
2.34.1
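To illustrate the resulting shape of the code (with stand-in types and
function names, not the driver's), the sketch below computes the aligned,
typed context pointer once at function entry and then uses plain member
access, which is the pattern the diff above moves xts_aesni_setkey() and
xts_crypt() to.

#include <stdint.h>

#define XTS_ALIGN 16    /* stands in for AESNI_ALIGN */

struct aes_ctx_standin { unsigned int round_keys[60]; };

struct xts_ctx_standin {
        struct aes_ctx_standin tweak_ctx __attribute__((aligned(XTS_ALIGN)));
        struct aes_ctx_standin crypt_ctx __attribute__((aligned(XTS_ALIGN)));
};

/* Stand-in for aes_xts_ctx(): align the raw context exactly once. */
static struct xts_ctx_standin *xts_ctx_get(void *raw_ctx)
{
        uintptr_t a = (uintptr_t)raw_ctx;

        return (struct xts_ctx_standin *)((a + XTS_ALIGN - 1) & ~(uintptr_t)(XTS_ALIGN - 1));
}

/* Placeholder cipher primitives, only here to show the call shape. */
static void encrypt_tweak(struct aes_ctx_standin *ctx) { (void)ctx; }
static void encrypt_data(struct aes_ctx_standin *ctx) { (void)ctx; }

static void xts_crypt_sketch(void *raw_ctx)
{
        /* One alignment up front ... */
        struct xts_ctx_standin *ctx = xts_ctx_get(raw_ctx);

        /* ... then every later access is a plain, typed member reference. */
        encrypt_tweak(&ctx->tweak_ctx);
        encrypt_data(&ctx->crypt_ctx);
}

int main(void)
{
        /* Extra slack so the rounded-up context still fits inside the buffer. */
        static unsigned char raw[sizeof(struct xts_ctx_standin) + XTS_ALIGN];

        xts_crypt_sketch(raw);
        return 0;
}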