crypto: aesni - disable "by8" AVX CTR optimization
authorMathias Krause <minipli@googlemail.com>
Tue, 23 Sep 2014 20:31:07 +0000 (22:31 +0200)
committerHerbert Xu <herbert@gondor.apana.org.au>
Wed, 24 Sep 2014 13:15:31 +0000 (21:15 +0800)
The "by8" implementation introduced in commit 22cddcc7df8f ("crypto: aes
- AES CTR x86_64 "by8" AVX optimization") is failing crypto tests as it
handles counter block overflows differently. It only accounts for the
rightmost 32 bits as a counter -- not the whole block as all other
implementations do. This makes it fail cryptomgr test #4, which
specifically tests this corner case.
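
For illustration only, not part of the patch: a minimal sketch of the two
increment behaviours, assuming plain C helpers (the function names below are
hypothetical, not kernel APIs). crypto_inc() treats the whole 16-byte counter
block as one big-endian integer, while the "by8" code effectively increments
only the low 32 bits.

#include <stdint.h>

#define AES_BLOCK_SIZE 16

/* Full-block increment, mirroring crypto_inc(): all 16 bytes form one
 * big-endian counter, so a carry ripples through the upper bytes. */
static void ctr_inc_full(uint8_t ctr[AES_BLOCK_SIZE])
{
        int i;

        for (i = AES_BLOCK_SIZE - 1; i >= 0; i--)
                if (++ctr[i])
                        break;
}

/* 32-bit-only increment, roughly what "by8" assumes: just the rightmost
 * four bytes act as the counter, so a carry out of bit 31 is dropped. */
static void ctr_inc_low32(uint8_t ctr[AES_BLOCK_SIZE])
{
        uint32_t low = ((uint32_t)ctr[12] << 24) | ((uint32_t)ctr[13] << 16) |
                       ((uint32_t)ctr[14] << 8)  |  (uint32_t)ctr[15];

        low++;
        ctr[12] = low >> 24;
        ctr[13] = low >> 16;
        ctr[14] = low >> 8;
        ctr[15] = low;
}

With a counter block of all 0xff bytes, ctr_inc_full() wraps the whole block
to zero, while ctr_inc_low32() only wraps the low word and leaves the upper
96 bits at 0xff -- exactly the divergence the failing test case exercises.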

As we're quite late in the release cycle, just disable the "by8" variant
for now.

Reported-by: Romain Francoise <romain@orebokech.com>
Signed-off-by: Mathias Krause <minipli@googlemail.com>
Cc: Chandramouli Narayanan <mouli@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
arch/x86/crypto/aesni-intel_glue.c

index 888950f29fd90f09574db838251725c49e28934a..a7ccd57f19e45531744eeab841c5ac7e41101074 100644
@@ -481,7 +481,7 @@ static void ctr_crypt_final(struct crypto_aes_ctx *ctx,
        crypto_inc(ctrblk, AES_BLOCK_SIZE);
 }
 
-#ifdef CONFIG_AS_AVX
+#if 0  /* temporarily disabled due to failing crypto tests */
 static void aesni_ctr_enc_avx_tfm(struct crypto_aes_ctx *ctx, u8 *out,
                              const u8 *in, unsigned int len, u8 *iv)
 {
@@ -1522,7 +1522,7 @@ static int __init aesni_init(void)
                aesni_gcm_dec_tfm = aesni_gcm_dec;
        }
        aesni_ctr_enc_tfm = aesni_ctr_enc;
-#ifdef CONFIG_AS_AVX
+#if 0  /* temporarily disabled due to failing crypto tests */
        if (cpu_has_avx) {
                /* optimize performance of ctr mode encryption transform */
                aesni_ctr_enc_tfm = aesni_ctr_enc_avx_tfm;