android_kernel_oneplus_msm8998/arch/x86/crypto
Eric Biggers 8a311b0462 crypto: salsa20 - fix blkcipher_walk API usage
commit ecaaab5649781c5a0effdaf298a925063020500e upstream.

When asked to encrypt or decrypt 0 bytes, both the generic and x86
implementations of Salsa20 crash in blkcipher_walk_done(), either when
doing 'kfree(walk->buffer)' or 'free_page((unsigned long)walk->page)',
because walk->buffer and walk->page have not been initialized.

The bug is that Salsa20 is calling blkcipher_walk_done() even when
nothing is in 'walk.nbytes'.  But blkcipher_walk_done() is only meant to
be called when a nonzero number of bytes have been provided.

The broken code is part of an optimization that tries to make only one
call to salsa20_encrypt_bytes() to process inputs that are not evenly
divisible by 64 bytes.  To fix the bug, just remove this "optimization"
and use the blkcipher_walk API the same way all the other users do.

Reproducer:

    #include <linux/if_alg.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main()
    {
            int algfd, reqfd;
            struct sockaddr_alg addr = {
                    .salg_type = "skcipher",
                    .salg_name = "salsa20",
            };
            char key[16] = { 0 };

            algfd = socket(AF_ALG, SOCK_SEQPACKET, 0);
            bind(algfd, (void *)&addr, sizeof(addr));
            reqfd = accept(algfd, 0, 0);
            setsockopt(algfd, SOL_ALG, ALG_SET_KEY, key, sizeof(key));
            read(reqfd, key, sizeof(key));
    }

Reported-by: syzbot <syzkaller@googlegroups.com>
Fixes: eb6f13eb9f ("[CRYPTO] salsa20_generic: Fix multi-page processing")
Signed-off-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2017-12-20 10:04:51 +01:00
sha-mb crypto: x86/sha1-mb - fix panic due to unaligned access 2017-11-15 17:13:12 +01:00
aes-i586-asm_32.S crypto: x86/aes - assembler clean-ups: use ENTRY/ENDPROC, localize jump targets 2013-01-20 10:16:47 +11:00
aes-x86_64-asm_64.S crypto: x86/aes - assembler clean-ups: use ENTRY/ENDPROC, localize jump targets 2013-01-20 10:16:47 +11:00
aes_ctrby8_avx-x86_64.S crypto: aesni - fix "by8" variant for 128 bit keys 2015-01-05 21:35:02 +11:00
aes_glue.c crypto: prefix module autoloading with "crypto-" 2014-11-24 22:43:57 +08:00
aesni-intel_asm.S crypto: aesni - Add support for 192 & 256 bit keys to AESNI RFC4106 2015-01-14 21:56:51 +11:00
aesni-intel_avx-x86_64.S crypto: aesni - fix build on x86 (32bit) 2014-01-15 11:36:34 +08:00
aesni-intel_glue.c crypto: aead - Remove CRYPTO_ALG_AEAD_NEW flag 2015-08-17 16:53:53 +08:00
blowfish-x86_64-asm_64.S crypto: blowfish-x86_64: use ENTRY()/ENDPROC() for assembler functions and localize jump targets 2013-01-20 10:16:48 +11:00
blowfish_glue.c crypto: prefix module autoloading with "crypto-" 2014-11-24 22:43:57 +08:00
camellia-aesni-avx-asm_64.S crypto: x86/camellia-aesni-avx - add more optimized XTS code 2013-04-25 21:01:52 +08:00
camellia-aesni-avx2-asm_64.S crypto: camellia-aesni-avx2 - tune assembly code for more performance 2013-06-21 14:44:23 +08:00
camellia-x86_64-asm_64.S crypto: camellia-x86_64/aes-ni: use ENTRY()/ENDPROC() for assembler functions and localize jump targets 2013-01-20 10:16:48 +11:00
camellia_aesni_avx2_glue.c x86/fpu: Rename XSAVE macros 2015-09-14 12:21:46 +02:00
camellia_aesni_avx_glue.c Merge branch 'x86-fpu-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip 2015-11-03 20:50:26 -08:00
camellia_glue.c crypto: prefix module autoloading with "crypto-" 2014-11-24 22:43:57 +08:00
cast5-avx-x86_64-asm_64.S crypto: cast5-avx: use ENTRY()/ENDPROC() for assembler functions and localize jump targets 2013-01-20 10:16:48 +11:00
cast5_avx_glue.c x86/fpu: Rename XSAVE macros 2015-09-14 12:21:46 +02:00
cast6-avx-x86_64-asm_64.S crypto: cast6-avx: use new optimized XTS code 2013-04-25 21:01:52 +08:00
cast6_avx_glue.c x86/fpu: Rename XSAVE macros 2015-09-14 12:21:46 +02:00
chacha20-avx2-x86_64.S crypto: chacha20 - Add an eight block AVX2 variant for x86_64 2015-07-17 21:20:25 +08:00
chacha20-ssse3-x86_64.S crypto: chacha20-ssse3 - Align stack pointer to 64 bytes 2016-02-17 12:31:04 -08:00
chacha20_glue.c x86/fpu: Rename XSAVE macros 2015-09-14 12:21:46 +02:00
crc32-pclmul_asm.S x86, crc32-pclmul: Fix build with older binutils 2013-05-30 16:36:23 -07:00
crc32-pclmul_glue.c x86/fpu: Rename i387.h to fpu/api.h 2015-05-19 15:47:30 +02:00
crc32c-intel_glue.c x86/fpu: Rename fpu-internal.h to fpu/internal.h 2015-05-19 15:47:31 +02:00
crc32c-pcl-intel-asm_64.S crypto: crc32c-pclmul - use .rodata instead of .rotata 2015-09-21 23:05:57 +08:00
crct10dif-pcl-asm_64.S Reinstate "crypto: crct10dif - Wrap crc_t10dif function all to use crypto transform framework" 2013-09-07 12:56:26 +10:00
crct10dif-pclmul_glue.c x86/fpu: Rename i387.h to fpu/api.h 2015-05-19 15:47:30 +02:00
des3_ede-asm_64.S crypto: des_3des - add x86-64 assembly implementation 2014-06-20 21:27:58 +08:00
des3_ede_glue.c crypto: x86/des3_ede - drop bogus module aliases 2015-01-13 22:30:52 +11:00
fpu.c Merge git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 2015-06-22 21:04:48 -07:00
ghash-clmulni-intel_asm.S crypto: ghash-clmulni-intel - Use u128 instead of be128 for internal key 2014-04-04 21:06:14 +08:00
ghash-clmulni-intel_glue.c crypto: ghash-clmulni - Fix load failure 2017-03-26 12:13:17 +02:00
glue_helper-asm-avx.S crypto: x86 - add more optimized XTS-mode for serpent-avx 2013-04-25 21:01:51 +08:00
glue_helper-asm-avx2.S crypto: twofish - add AVX2/x86_64 assembler implementation of twofish cipher 2013-04-25 21:09:05 +08:00
glue_helper.c crypto: don't export static symbol 2015-03-13 21:37:15 +11:00
Makefile crypto: x86/sha - Add build support for Intel SHA Extensions optimized SHA1 and SHA256 2015-09-21 22:01:06 +08:00
poly1305-avx2-x86_64.S crypto: poly1305 - Add a four block AVX2 variant for x86_64 2015-07-17 21:20:29 +08:00
poly1305-sse2-x86_64.S crypto: poly1305 - Add a two block SSE2 variant for x86_64 2015-07-17 21:20:28 +08:00
poly1305_glue.c x86/fpu: Rename XSAVE macros 2015-09-14 12:21:46 +02:00
salsa20-i586-asm_32.S crypto: x86/salsa20 - assembler cleanup, use ENTRY/ENDPROC for assember functions and rename ECRYPT_* to salsa20_* 2013-01-20 10:16:50 +11:00
salsa20-x86_64-asm_64.S crypto: x86/salsa20 - assembler cleanup, use ENTRY/ENDPROC for assember functions and rename ECRYPT_* to salsa20_* 2013-01-20 10:16:50 +11:00
salsa20_glue.c crypto: salsa20 - fix blkcipher_walk API usage 2017-12-20 10:04:51 +01:00
serpent-avx-x86_64-asm_64.S crypto: x86 - add more optimized XTS-mode for serpent-avx 2013-04-25 21:01:51 +08:00
serpent-avx2-asm_64.S crypto: serpent - add AVX2/x86_64 assembler implementation of serpent cipher 2013-04-25 21:09:07 +08:00
serpent-sse2-i586-asm_32.S crypto: x86/serpent - use ENTRY/ENDPROC for assember functions and localize jump targets 2013-01-20 10:16:50 +11:00
serpent-sse2-x86_64-asm_64.S crypto: x86/serpent - use ENTRY/ENDPROC for assember functions and localize jump targets 2013-01-20 10:16:50 +11:00
serpent_avx2_glue.c x86/fpu: Rename XSAVE macros 2015-09-14 12:21:46 +02:00
serpent_avx_glue.c x86/fpu: Rename XSAVE macros 2015-09-14 12:21:46 +02:00
serpent_sse2_glue.c crypto: serpent_sse2 - mark Serpent SSE2 helper ciphers 2015-03-31 21:21:10 +08:00
sha1_avx2_x86_64_asm.S crypto: x86/sha1 - Fix reads beyond the number of blocks passed 2017-08-24 17:02:35 -07:00
sha1_ni_asm.S crypto: x86/sha - Intel SHA Extensions optimized SHA1 transform function 2015-09-21 22:01:05 +08:00
sha1_ssse3_asm.S crypto: x86/sha1 - assembler clean-ups: use ENTRY/ENDPROC 2013-01-20 10:16:51 +11:00
sha1_ssse3_glue.c crypto: x86/sha1 - Fix reads beyond the number of blocks passed 2017-08-24 17:02:35 -07:00
sha256-avx-asm.S crypto: x86/sha256_ssse3 - move SHA-224/256 SSSE3 implementation to base layer 2015-04-10 21:39:47 +08:00
sha256-avx2-asm.S crypto: x86/sha256_ssse3 - move SHA-224/256 SSSE3 implementation to base layer 2015-04-10 21:39:47 +08:00
sha256-ssse3-asm.S crypto: x86/sha256_ssse3 - move SHA-224/256 SSSE3 implementation to base layer 2015-04-10 21:39:47 +08:00
sha256_ni_asm.S crypto: x86/sha - Intel SHA Extensions optimized SHA256 transform function 2015-09-21 22:01:06 +08:00
sha256_ssse3_glue.c Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 2015-11-04 09:11:12 -08:00
sha512-avx-asm.S crypto: x86/sha512_ssse3 - move SHA-384/512 SSSE3 implementation to base layer 2015-04-10 21:39:48 +08:00
sha512-avx2-asm.S crypto: x86/sha512_ssse3 - fixup for asm function prototype change 2015-04-24 20:09:01 +08:00
sha512-ssse3-asm.S crypto: x86/sha512_ssse3 - move SHA-384/512 SSSE3 implementation to base layer 2015-04-10 21:39:48 +08:00
sha512_ssse3_glue.c Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 2015-11-04 09:11:12 -08:00
twofish-avx-x86_64-asm_64.S crypto: x86/twofish-avx - use optimized XTS code 2013-04-25 21:01:51 +08:00
twofish-i586-asm_32.S crypto: x86/twofish - assembler clean-ups: use ENTRY/ENDPROC, localize jump labels 2013-01-20 10:16:51 +11:00
twofish-x86_64-asm_64-3way.S crypto: x86/twofish - assembler clean-ups: use ENTRY/ENDPROC, localize jump labels 2013-01-20 10:16:51 +11:00
twofish-x86_64-asm_64.S x86/asm: Replace "MOVQ $imm, %reg" with MOVL 2015-04-01 13:17:39 +02:00
twofish_avx_glue.c x86/fpu: Fixup uninitialized feature_name warning 2015-09-24 09:21:20 +02:00
twofish_glue.c crypto: prefix module autoloading with "crypto-" 2014-11-24 22:43:57 +08:00
twofish_glue_3way.c crypto: prefix module autoloading with "crypto-" 2014-11-24 22:43:57 +08:00