path: root/arch/arm/crypto
Age | Commit message | Author | Files | Lines
2019-09-09 | crypto: arm/aes-ce - implement ciphertext stealing for CBC | Ard Biesheuvel | 2 | -17/+256
Instead of relying on the CTS template to wrap the accelerated CBC skcipher, implement the ciphertext stealing part directly. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-09 | crypto: arm/aes-neonbs - implement ciphertext stealing for XTS | Ard Biesheuvel | 2 | -13/+72
Update the AES-XTS implementation based on NEON instructions so that it can deal with inputs whose size is not a multiple of the cipher block size. This is part of the original XTS specification, but was never implemented before in the Linux kernel. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-09 | crypto: arm/aes-ce - implement ciphertext stealing for XTS | Ard Biesheuvel | 2 | -23/+208
Update the AES-XTS implementation based on AES instructions so that it can deal with inputs whose size is not a multiple of the cipher block size. This is part of the original XTS specification, but was never implemented before in the Linux kernel. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
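To make the scheme concrete, here is a hedged C sketch of the XTS ciphertext-stealing tail handling described above, using the kernel's AES library cipher and gf128mul helpers for readability. The real driver performs these steps in Crypto Extensions assembly; the function below is illustrative, not the driver's actual code.

```c
#include <crypto/aes.h>       /* struct crypto_aes_ctx, aes_encrypt() */
#include <crypto/algapi.h>    /* crypto_xor() */
#include <crypto/gf128mul.h>  /* le128, gf128mul_x_ble() */
#include <linux/string.h>

/*
 * Encrypt the final partial block of 'tail' bytes (0 < tail < 16).
 * On entry, cprev holds the already-encrypted last full block and tweak
 * holds its tweak; on exit cprev holds the new C_{q-1} and ctail receives
 * the stolen, truncated ciphertext block C_q.
 */
static void xts_encrypt_tail(const struct crypto_aes_ctx *key, le128 *tweak,
                             u8 *cprev, u8 *ctail, const u8 *ptail,
                             unsigned int tail)
{
    u8 pp[AES_BLOCK_SIZE];

    gf128mul_x_ble(tweak, tweak);            /* T_q = T_{q-1} * x            */
    memcpy(ctail, cprev, tail);              /* C_q = head of old last block */
    memcpy(pp, ptail, tail);                 /* PP  = P_q || stolen tail of  */
    memcpy(pp + tail, cprev + tail,          /*       the old last block     */
           AES_BLOCK_SIZE - tail);

    crypto_xor(pp, (const u8 *)tweak, AES_BLOCK_SIZE);
    aes_encrypt(key, cprev, pp);             /* C_{q-1} = E_K(PP ^ T_q) ^ T_q */
    crypto_xor(cprev, (const u8 *)tweak, AES_BLOCK_SIZE);
}
```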
2019-09-09 | crypto: arm/aes-neonbs - replace tweak mask literal with composition | Ard Biesheuvel | 1 | -5/+3
Replace the vector load from memory sequence with a simple instruction sequence to compose the tweak vector directly. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-09 | crypto: arm/aes-ce - replace tweak mask literal with composition | Ard Biesheuvel | 1 | -6/+3
Replace the vector load from memory sequence with a simple instruction sequence to compose the tweak vector directly. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-09 | crypto: arm/aes-ce - switch to 4x interleave | Ard Biesheuvel | 1 | -119/+144
When the ARM AES instruction based crypto driver was introduced, there were no known implementations that could benefit from a 4-way interleave, and so a 3-way interleave was used instead. Since we have sufficient space in the SIMD register file, let's switch to a 4-way interleave to align with the 64-bit driver, and to ensure that we can reach optimum performance when running under emulation on high end 64-bit cores. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-09 | crypto: arm/aes-ce - yield the SIMD unit between scatterwalk steps | Ard Biesheuvel | 2 | -35/+34
Reduce the scope of the kernel_neon_begin/end regions so that the SIMD unit is released (and thus preemption re-enabled) if the crypto operation cannot be completed in a single scatterwalk step. This avoids scheduling blackouts due to preemption being enabled for unbounded periods, resulting in a more responsive system. After this change, we can also permit the cipher_walk infrastructure to sleep, so set the 'atomic' parameter to skcipher_walk_virt() to false as well. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
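The resulting glue-code shape looks roughly like the sketch below; the NEON core's name and signature are placeholders for the real routines in the .S file.

```c
#include <asm/neon.h>                   /* kernel_neon_begin()/end() */
#include <crypto/aes.h>
#include <crypto/internal/skcipher.h>

/* placeholder for the real assembly core in aes-ce-core.S */
void my_aes_ecb_encrypt(u8 *out, const u8 *in, const u32 *rk, int rounds,
                        int blocks);

static int ecb_encrypt(struct skcipher_request *req, const u32 *rk, int rounds)
{
    struct skcipher_walk walk;
    int err;

    /* atomic == false: the walk is now allowed to sleep between steps */
    err = skcipher_walk_virt(&walk, req, false);

    while (walk.nbytes >= AES_BLOCK_SIZE) {
        unsigned int blocks = walk.nbytes / AES_BLOCK_SIZE;

        kernel_neon_begin();            /* claim SIMD for this step only */
        my_aes_ecb_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
                           rk, rounds, blocks);
        kernel_neon_end();              /* release it; preemption possible again */

        err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
    }
    return err;
}
```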
2019-09-09 | crypto: arm/aes - fix round key prototypes | Ard Biesheuvel | 2 | -29/+29
The AES round keys are arrays of u32s in native endianness now, so update the function prototypes accordingly. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-09-05 | crypto: arm - Rename functions to avoid conflict with crypto/sha256.h | Hans de Goede | 2 | -16/+16
Rename static / file-local functions so that they do not conflict with the functions declared in crypto/sha256.h. This is a preparation patch for folding crypto/sha256.h into crypto/sha.h. Signed-off-by: Hans de Goede <hdegoede@redhat.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-27 | crypto: ghash - add comment and improve help text | Eric Biggers | 1 | -1/+1
To help avoid confusion, add a comment to ghash-generic.c which explains the convention that the kernel's implementation of GHASH uses. Also update the Kconfig help text and module descriptions to call GHASH a "hash function" rather than a "message digest", since the latter normally means a real cryptographic hash function, which GHASH is not. Cc: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Pascal Van Leeuwen <pvanleeuwen@verimatrix.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26 | crypto: arm/aes-scalar - unexport en/decryption routines | Ard Biesheuvel | 1 | -3/+0
The scalar table based AES routines are not used by other drivers, so let's keep it that way and unexport the symbols. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26 | crypto: arm/aes-cipher - switch to shared AES inverse Sbox | Ard Biesheuvel | 1 | -39/+1
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26 | crypto: arm/ghash - provide a synchronous version | Ard Biesheuvel | 1 | -26/+52
GHASH is used by the GCM mode, which is often used in contexts where only synchronous ciphers are permitted. So provide a synchronous version of GHASH based on the existing code. This requires a non-SIMD fallback to deal with invocations occurring from a context where SIMD instructions may not be used. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26 | crypto: arm/aes-neonbs - provide a synchronous version of ctr(aes) | Ard Biesheuvel | 1 | -0/+65
AES in CTR mode is used by modes such as GCM and CCM, which are often used in contexts where only synchronous ciphers are permitted. So provide a synchronous version of ctr(aes) based on the existing code. This requires a non-SIMD fallback to deal with invocations occurring from a context where SIMD instructions may not be used. We have a helper for this now in the AES library, so wire that up. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26 | crypto: arm/aes-ce - provide a synchronous version of ctr(aes) | Ard Biesheuvel | 1 | -0/+43
AES in CTR mode is used by modes such as GCM and CCM, which are often used in contexts where only synchronous ciphers are permitted. So provide a synchronous version of ctr(aes) based on the existing code. This requires a non-SIMD fallback to deal with invocations occurring from a context where SIMD instructions may not be used. We have a helper for this now in the AES library, so wire that up. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
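A sketch of that fallback pattern, assuming the AES library cipher and the crypto_ctr_encrypt_walk() helper; the names are close to, but not guaranteed to match, the actual glue code.

```c
#include <crypto/aes.h>                 /* aes_encrypt(), struct crypto_aes_ctx */
#include <crypto/ctr.h>                 /* crypto_ctr_encrypt_walk() */
#include <crypto/internal/simd.h>       /* crypto_simd_usable() */
#include <crypto/internal/skcipher.h>

/* the driver's existing SIMD CTR path, defined elsewhere in the glue file */
int ctr_encrypt(struct skcipher_request *req);

/* encrypt a single counter block without touching the SIMD unit */
static void ctr_encrypt_one(struct crypto_skcipher *tfm, const u8 *src, u8 *dst)
{
    struct crypto_aes_ctx *ctx = crypto_skcipher_ctx(tfm);

    aes_encrypt(ctx, dst, src);
}

static int ctr_encrypt_sync(struct skcipher_request *req)
{
    if (!crypto_simd_usable())
        /* e.g. called from softirq context: use the scalar library cipher */
        return crypto_ctr_encrypt_walk(req, ctr_encrypt_one);

    return ctr_encrypt(req);            /* the regular NEON/CE path */
}
```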
2019-07-26 | crypto: arm/aes - use native endianness for key schedule | Ard Biesheuvel | 2 | -16/+13
Align ARM's hw instruction based AES implementation with other versions that keep the key schedule in native endianness. This will allow us to merge the various implementations going forward. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26 | crypto: arm/aes-neonbs - switch to library version of key expansion routine | Ard Biesheuvel | 2 | -3/+3
Switch to the new AES library that also provides an implementation of the AES key expansion routine. This removes the dependency on the generic AES cipher, allowing it to be omitted entirely in the future. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
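For illustration only, a minimal setkey path using the library expansion routine; the context struct and function names here are hypothetical.

```c
#include <crypto/aes.h>                 /* aes_expandkey(), struct crypto_aes_ctx */
#include <crypto/internal/skcipher.h>

struct my_aes_ctx {
    struct crypto_aes_ctx rk;           /* expanded round keys */
};

static int my_setkey(struct crypto_skcipher *tfm, const u8 *in_key,
                     unsigned int key_len)
{
    struct my_aes_ctx *ctx = crypto_skcipher_ctx(tfm);

    /* aes_expandkey() replaces the old dependency on the generic AES
     * cipher's setkey; it rejects invalid key lengths itself */
    return aes_expandkey(&ctx->rk, in_key, key_len);
}
```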
2019-07-26 | crypto: aes - rename local routines to prevent future clashes | Ard Biesheuvel | 1 | -4/+4
Rename some local AES encrypt/decrypt routines so they don't clash with the names we are about to introduce for the routines exposed by the generic AES library. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-26 | crypto: arm/aes-ce - cosmetic/whitespace cleanup | Ard Biesheuvel | 1 | -60/+56
Rearrange the aes_algs[] array for legibility. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-07-08 | Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 | Linus Torvalds | 2 | -2/+2
Pull crypto updates from Herbert Xu:
 "Here is the crypto update for 5.3:

  API:
   - Test shash interface directly in testmgr
   - cra_driver_name is now mandatory

  Algorithms:
   - Replace arc4 crypto_cipher with library helper
   - Implement 5 way interleave for ECB, CBC and CTR on arm64
   - Add xxhash
   - Add continuous self-test on noise source to drbg
   - Update jitter RNG

  Drivers:
   - Add support for SHA204A random number generator
   - Add support for 7211 in iproc-rng200
   - Fix fuzz test failures in inside-secure
   - Fix fuzz test failures in talitos
   - Fix fuzz test failures in qat"

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (143 commits)
  crypto: stm32/hash - remove interruptible condition for dma
  crypto: stm32/hash - Fix hmac issue more than 256 bytes
  crypto: stm32/crc32 - rename driver file
  crypto: amcc - remove memset after dma_alloc_coherent
  crypto: ccp - Switch to SPDX license identifiers
  crypto: ccp - Validate the the error value used to index error messages
  crypto: doc - Fix formatting of new crypto engine content
  crypto: doc - Add parameter documentation
  crypto: arm64/aes-ce - implement 5 way interleave for ECB, CBC and CTR
  crypto: arm64/aes-ce - add 5 way interleave routines
  crypto: talitos - drop icv_ool
  crypto: talitos - fix hash on SEC1.
  crypto: talitos - move struct talitos_edesc into talitos.h
  lib/scatterlist: Fix mapping iterator when sg->offset is greater than PAGE_SIZE
  crypto/NX: Set receive window credits to max number of CRBs in RxFIFO
  crypto: asymmetric_keys - select CRYPTO_HASH where needed
  crypto: serpent - mark __serpent_setkey_sbox noinline
  crypto: testmgr - dynamically allocate crypto_shash
  crypto: testmgr - dynamically allocate testvec_config
  crypto: talitos - eliminate unneeded 'done' functions at build time
  ...
2019-06-19 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 500 | Thomas Gleixner | 16 | -64/+16
Based on 2 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation this program is free software you can redistribute it and or modify it under the terms of the gnu general public license version 2 as published by the free software foundation # extracted by the scancode license scanner the SPDX license identifier GPL-2.0-only has been chosen to replace the boilerplate/reference in 4122 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Enrico Weigelt <info@metux.net> Reviewed-by: Kate Stewart <kstewart@linuxfoundation.org> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190604081206.933168790@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-06-13 | crypto: chacha - constify ctx and iv arguments | Eric Biggers | 1 | -1/+1
Constify the ctx and iv arguments to crypto_chacha_init() and the various chacha*_stream_xor() functions. This makes it clear that they are not modified. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-05-30 | treewide: Replace GPLv2 boilerplate/reference with SPDX - rule 152 | Thomas Gleixner | 5 | -29/+5
Based on 1 normalized pattern(s): this program is free software you can redistribute it and or modify it under the terms of the gnu general public license as published by the free software foundation either version 2 of the license or at your option any later version extracted by the scancode license scanner the SPDX license identifier GPL-2.0-or-later has been chosen to replace the boilerplate/reference in 3029 file(s). Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Allison Randal <allison@lohutok.net> Cc: linux-spdx@vger.kernel.org Link: https://lkml.kernel.org/r/20190527070032.746973796@linutronix.de Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2019-05-23 | crypto: arm/sha512 - Make sha512_arm_final static | YueHaibing | 1 | -1/+1
Fix sparse warning: arch/arm/crypto/sha512-glue.c:40:5: warning: symbol 'sha512_arm_final' was not declared. Should it be static? Reported-by: Hulk Robot <hulkci@huawei.com> Signed-off-by: YueHaibing <yuehaibing@huawei.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-04-25 | crypto: shash - remove shash_desc::flags | Eric Biggers | 1 | -3/+0
The flags field in 'struct shash_desc' never actually does anything. The only ostensibly supported flag is CRYPTO_TFM_REQ_MAY_SLEEP. However, no shash algorithm ever sleeps, making this flag a no-op. With this being the case, inevitably some users who can't sleep wrongly pass MAY_SLEEP. These would all need to be fixed if any shash algorithm actually started sleeping. For example, the shash_ahash_*() functions, which wrap a shash algorithm with the ahash API, pass through MAY_SLEEP from the ahash API to the shash API. However, the shash functions are called under kmap_atomic(), so actually they're assumed to never sleep. Even if it turns out that some users do need preemption points while hashing large buffers, we could easily provide a helper function crypto_shash_update_large() which divides the data into smaller chunks and calls crypto_shash_update() and cond_resched() for each chunk. It's not necessary to have a flag in 'struct shash_desc', nor is it necessary to make individual shash algorithms aware of this at all. Therefore, remove shash_desc::flags, and document that the crypto_shash_*() functions can be called from any context. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
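A sketch of what such a helper could look like; it does not exist in the tree, and the name and chunk size below are purely illustrative.

```c
#include <crypto/hash.h>        /* crypto_shash_update(), struct shash_desc */
#include <linux/kernel.h>       /* min_t() */
#include <linux/sched.h>        /* cond_resched() */

#define SHASH_RESCHED_CHUNK     (64 * 1024)     /* arbitrary chunk size */

static int crypto_shash_update_large(struct shash_desc *desc,
                                     const u8 *data, unsigned int len)
{
    while (len) {
        unsigned int n = min_t(unsigned int, len, SHASH_RESCHED_CHUNK);
        int err = crypto_shash_update(desc, data, n);

        if (err)
            return err;
        data += n;
        len -= n;
        cond_resched();         /* explicit preemption point between chunks */
    }
    return 0;
}
```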
2019-04-18 | crypto: arm/aes-neonbs - don't access already-freed walk.iv | Eric Biggers | 1 | -0/+2
If the user-provided IV needs to be aligned to the algorithm's alignmask, then skcipher_walk_virt() copies the IV into a new aligned buffer walk.iv. But skcipher_walk_virt() can fail afterwards, and then if the caller unconditionally accesses walk.iv, it's a use-after-free. arm32 xts-aes-neonbs doesn't set an alignmask, so currently it isn't affected by this despite unconditionally accessing walk.iv. However this is more subtle than desired, and it was actually broken prior to the alignmask being removed by commit cc477bf64573 ("crypto: arm/aes - replace bit-sliced OpenSSL NEON code"). Thus, update xts-aes-neonbs to start checking the return value of skcipher_walk_virt(). Fixes: e4e7f10bfc40 ("ARM: add support for bit sliced AES using NEON instructions") Cc: <stable@vger.kernel.org> # v3.13+ Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
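The shape of the fix, sketched below; the tweak-encryption call is representative of how walk.iv gets used, not the driver's exact line.

```c
#include <crypto/internal/skcipher.h>
#include <linux/crypto.h>       /* crypto_cipher_encrypt_one() */

static int xts_crypt_start(struct skcipher_request *req,
                           struct skcipher_walk *walk,
                           struct crypto_cipher *tweak_tfm)
{
    int err = skcipher_walk_virt(walk, req, true);

    if (err)
        return err;     /* never touch walk->iv when the walk failed:
                           it may point at already-freed memory */

    /* only now is walk->iv a valid (possibly realigned) copy of req->iv */
    crypto_cipher_encrypt_one(tweak_tfm, walk->iv, walk->iv);
    return 0;
}
```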
2019-03-22 | crypto: arm - convert to use crypto_simd_usable() | Eric Biggers | 10 | -19/+29
Replace all calls to may_use_simd() in the arm crypto code with crypto_simd_usable(), in order to allow testing the no-SIMD code paths. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-03-05 | Merge branch 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6 | Linus Torvalds | 3 | -338/+281
Pull crypto update from Herbert Xu:
 "API:
   - Add helper for simple skcipher modes.
   - Add helper to register multiple templates.
   - Set CRYPTO_TFM_NEED_KEY when setkey fails.
   - Require neither or both of export/import in shash.
   - AEAD decryption test vectors are now generated from encryption ones.
   - New option CONFIG_CRYPTO_MANAGER_EXTRA_TESTS that includes random fuzzing.

  Algorithms:
   - Conversions to skcipher and helper for many templates.
   - Add more test vectors for nhpoly1305 and adiantum.

  Drivers:
   - Add crypto4xx prng support.
   - Add xcbc/cmac/ecb support in caam.
   - Add AES support for Exynos5433 in s5p.
   - Remove sha384/sha512 from artpec7 as hardware cannot do partial hash"

[ There is a merge of the Freescale SoC tree in order to pull in changes required by patches to the caam/qi2 driver. ]

* 'linus' of git://git.kernel.org/pub/scm/linux/kernel/git/herbert/crypto-2.6: (174 commits)
  crypto: s5p - add AES support for Exynos5433
  dt-bindings: crypto: document Exynos5433 SlimSSS
  crypto: crypto4xx - add missing of_node_put after of_device_is_available
  crypto: cavium/zip - fix collision with generic cra_driver_name
  crypto: af_alg - use struct_size() in sock_kfree_s()
  crypto: caam - remove redundant likely/unlikely annotation
  crypto: s5p - update iv after AES-CBC op end
  crypto: x86/poly1305 - Clear key material from stack in SSE2 variant
  crypto: caam - generate hash keys in-place
  crypto: caam - fix DMA mapping xcbc key twice
  crypto: caam - fix hash context DMA unmap size
  hwrng: bcm2835 - fix probe as platform device
  crypto: s5p-sss - Use AES_BLOCK_SIZE define instead of number
  crypto: stm32 - drop pointless static qualifier in stm32_hash_remove()
  crypto: chelsio - Fixed Traffic Stall
  crypto: marvell - Remove set but not used variable 'ivsize'
  crypto: ccp - Update driver messages to remove some confusion
  crypto: adiantum - add 1536 and 4096-byte test vectors
  crypto: nhpoly1305 - add a test vector with len % 16 != 0
  crypto: arm/aes-ce - update IV after partial final CTR block
  ...
2019-02-22 | crypto: arm/aes-ce - update IV after partial final CTR block | Eric Biggers | 1 | -13/+13
Make the arm ctr-aes-ce algorithm update the IV buffer to contain the next counter after processing a partial final block, rather than leave it as the last counter. This makes ctr-aes-ce pass the updated AES-CTR tests. This change also makes the code match the arm64 version in arch/arm64/crypto/aes-modes.S more closely. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
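As a reference model of the expected behaviour (the actual change is in the aes-ce assembly), a C sketch in which the IV always ends up holding the next counter value, even after a partial final block:

```c
#include <crypto/aes.h>         /* aes_encrypt(), AES_BLOCK_SIZE */
#include <crypto/algapi.h>      /* crypto_inc(), crypto_xor_cpy() */
#include <linux/kernel.h>       /* min_t() */

static void ctr_ref_crypt(const struct crypto_aes_ctx *key, u8 *dst,
                          const u8 *src, unsigned int len,
                          u8 iv[AES_BLOCK_SIZE])
{
    u8 ks[AES_BLOCK_SIZE];

    while (len) {
        unsigned int n = min_t(unsigned int, len, AES_BLOCK_SIZE);

        aes_encrypt(key, ks, iv);           /* keystream = E_K(counter) */
        crypto_xor_cpy(dst, src, ks, n);
        /* advance the big-endian counter even for a partial final block,
         * so the IV handed back is the *next* counter value */
        crypto_inc(iv, AES_BLOCK_SIZE);
        src += n;
        dst += n;
        len -= n;
    }
}
```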
2019-02-22 | crypto: sha512/arm - fix crash bug in Thumb2 build | Ard Biesheuvel | 2 | -2/+4
The SHA512 code we adopted from the OpenSSL project uses a rather peculiar way to take the address of the round constant table: it takes the address of the sha256_block_data_order() routine, and subtracts a constant known quantity to arrive at the base of the table, which is emitted by the same assembler code right before the routine's entry point. However, recent versions of binutils have helpfully changed the behavior of references emitted via an ADR instruction when running in Thumb2 mode: it now takes the Thumb execution mode bit into account, which is bit 0 of the address. This means the produced table address also has bit 0 set, and so we end up with an address value pointing 1 byte past the start of the table, which results in crashes such as

  Unable to handle kernel paging request at virtual address bf825000
  pgd = 42f44b11
  [bf825000] *pgd=80000040206003, *pmd=5f1bd003, *pte=00000000
  Internal error: Oops: 207 [#1] PREEMPT SMP THUMB2
  Modules linked in: sha256_arm(+) sha1_arm_ce sha1_arm ...
  CPU: 7 PID: 396 Comm: cryptomgr_test Not tainted 5.0.0-rc6+ #144
  Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
  PC is at sha256_block_data_order+0xaaa/0xb30 [sha256_arm]
  LR is at __this_module+0x17fd/0xffffe800 [sha256_arm]
  pc : [<bf820bca>]  lr : [<bf824ffd>]  psr: 800b0033
  sp : ebc8bbe8  ip : faaabe1c  fp : 2fdd3433
  r10: 4c5f1692  r9 : e43037df  r8 : b04b0a5a
  r7 : c369d722  r6 : 39c3693e  r5 : 7a013189  r4 : 1580d26b
  r3 : 8762a9b0  r2 : eea9c2cd  r1 : 3e9ab536  r0 : 1dea4ae7
  Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
  Control: 70c5383d  Table: 6b8467c0  DAC: dbadc0de
  Process cryptomgr_test (pid: 396, stack limit = 0x69e1fe23)
  Stack: (0xebc8bbe8 to 0xebc8c000)
  ...
  unwind: Unknown symbol address bf820bca
  unwind: Index not found bf820bca
  Code: 441a ea80 40f9 440a (f85e) 3b04
  ---[ end trace e560cce92700ef8a ]---

Given that this affects older kernels as well, in case they are built with a recent toolchain, apply a minimal backportable fix, which is to emit another non-code label at the start of the routine, and reference that instead. (This is similar to the current upstream state of this file in OpenSSL) Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-22 | crypto: sha256/arm - fix crash bug in Thumb2 build | Ard Biesheuvel | 2 | -2/+4
The SHA256 code we adopted from the OpenSSL project uses a rather peculiar way to take the address of the round constant table: it takes the address of the sha256_block_data_order() routine, and subtracts a constant known quantity to arrive at the base of the table, which is emitted by the same assembler code right before the routine's entry point. However, recent versions of binutils have helpfully changed the behavior of references emitted via an ADR instruction when running in Thumb2 mode: it now takes the Thumb execution mode bit into account, which is bit 0 of the address. This means the produced table address also has bit 0 set, and so we end up with an address value pointing 1 byte past the start of the table, which results in crashes such as

  Unable to handle kernel paging request at virtual address bf825000
  pgd = 42f44b11
  [bf825000] *pgd=80000040206003, *pmd=5f1bd003, *pte=00000000
  Internal error: Oops: 207 [#1] PREEMPT SMP THUMB2
  Modules linked in: sha256_arm(+) sha1_arm_ce sha1_arm ...
  CPU: 7 PID: 396 Comm: cryptomgr_test Not tainted 5.0.0-rc6+ #144
  Hardware name: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
  PC is at sha256_block_data_order+0xaaa/0xb30 [sha256_arm]
  LR is at __this_module+0x17fd/0xffffe800 [sha256_arm]
  pc : [<bf820bca>]  lr : [<bf824ffd>]  psr: 800b0033
  sp : ebc8bbe8  ip : faaabe1c  fp : 2fdd3433
  r10: 4c5f1692  r9 : e43037df  r8 : b04b0a5a
  r7 : c369d722  r6 : 39c3693e  r5 : 7a013189  r4 : 1580d26b
  r3 : 8762a9b0  r2 : eea9c2cd  r1 : 3e9ab536  r0 : 1dea4ae7
  Flags: Nzcv  IRQs on  FIQs on  Mode SVC_32  ISA Thumb  Segment user
  Control: 70c5383d  Table: 6b8467c0  DAC: dbadc0de
  Process cryptomgr_test (pid: 396, stack limit = 0x69e1fe23)
  Stack: (0xebc8bbe8 to 0xebc8c000)
  ...
  unwind: Unknown symbol address bf820bca
  unwind: Index not found bf820bca
  Code: 441a ea80 40f9 440a (f85e) 3b04
  ---[ end trace e560cce92700ef8a ]---

Given that this affects older kernels as well, in case they are built with a recent toolchain, apply a minimal backportable fix, which is to emit another non-code label at the start of the routine, and reference that instead. (This is similar to the current upstream state of this file in OpenSSL) Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-08 | crypto: arm/crct10dif-ce - cleanup and optimizations | Eric Biggers | 2 | -294/+261
The x86, arm, and arm64 asm implementations of crct10dif are very difficult to understand partly because many of the comments, labels, and macros are named incorrectly: the lengths mentioned are usually off by a factor of two from the actual code. Many other things are unnecessarily convoluted as well, e.g. there are many more fold constants than actually needed and some aren't fully reduced. This series therefore cleans up all these implementations to be much more maintainable. I also made some small optimizations where I saw opportunities, resulting in slightly better performance. This patch cleans up the arm version. (Also moved the constants to .rodata as suggested by Ard Biesheuvel.) Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-01 | crypto: arm/crct10dif - remove dead code | Ard Biesheuvel | 1 | -13/+0
Remove some code that is no longer called now that we make sure never to invoke the SIMD routine with less that 16 bytes of input. Reviewed-by: Eric Biggers <ebiggers@kernel.org> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2019-02-01 | crypto: arm/crct10dif - revert to C code for short inputs | Ard Biesheuvel | 2 | -24/+13
The SIMD routine ported from x86 used to have a special code path for inputs < 16 bytes, which got lost somewhere along the way. Instead, the current glue code aligns the input pointer to permit the NEON routine to use special versions of the vld1 instructions that assume 16 byte alignment, but this could result in inputs of less than 16 bytes to be passed in. This not only fails the new extended tests that Eric has implemented, it also results in the code reading past the end of the input, which could potentially result in crashes when dealing with less than 16 bytes of input at the end of a page which is followed by an unmapped page. So update the glue code to only invoke the NEON routine if the input is at least 16 bytes. Reported-by: Eric Biggers <ebiggers@kernel.org> Reviewed-by: Eric Biggers <ebiggers@kernel.org> Fixes: 1d481f1cd892 ("crypto: arm/crct10dif - port x86 SSE implementation to ARM") Cc: <stable@vger.kernel.org> # v4.10+ Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
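The resulting glue code looks roughly like the sketch below; the NEON routine name and the 16-byte threshold constant are assumptions mirroring the description above.

```c
#include <asm/neon.h>
#include <crypto/internal/hash.h>       /* shash_desc_ctx() */
#include <crypto/internal/simd.h>       /* crypto_simd_usable() */
#include <linux/crc-t10dif.h>           /* crc_t10dif_generic() */

#define PMULL_MIN_LEN   16      /* below this, the NEON path is not safe */

/* placeholder declaration for the NEON/PMULL core in the .S file */
u16 crc_t10dif_pmull(u16 init_crc, const u8 *buf, size_t len);

static int crct10dif_update(struct shash_desc *desc, const u8 *data,
                            unsigned int length)
{
    u16 *crc = shash_desc_ctx(desc);

    if (length >= PMULL_MIN_LEN && crypto_simd_usable()) {
        kernel_neon_begin();
        *crc = crc_t10dif_pmull(*crc, data, length);
        kernel_neon_end();
    } else {
        *crc = crc_t10dif_generic(*crc, data, length);  /* scalar C fallback */
    }
    return 0;
}
```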
2018-12-29 | Merge tag 'kbuild-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild | Linus Torvalds | 1 | -1/+1
Pull Kbuild updates from Masahiro Yamada:

 "Kbuild core:
   - remove unneeded $(call cc-option,...) switches
   - consolidate Clang compiler flags into CLANG_FLAGS
   - announce the deprecation of SUBDIRS
   - fix single target build for external module
   - simplify the dependencies of 'prepare' stage targets
   - allow fixdep to directly write to .*.cmd files
   - simplify dependency generation for CONFIG_TRIM_UNUSED_KSYMS
   - change if_changed_rule to accept multi-line recipe
   - move .SECONDARY special target to scripts/Kbuild.include
   - remove redundant 'set -e'
   - improve parallel execution for CONFIG_HEADERS_CHECK
   - misc cleanups

  Treewide fixes and cleanups:
   - set Clang flags correctly for PowerPC boot images
   - fix UML build error with CONFIG_GCC_PLUGINS
   - remove unneeded patterns from .gitignore files
   - refactor firmware/Makefile
   - remove unneeded rules for *offsets.s
   - avoid unneeded regeneration of intermediate .s files
   - clean up ./Kbuild

  Modpost:
   - remove unused -M, -K options
   - fix false positive warnings about section mismatch
   - use simple devtable lookup instead of linker magic
   - misc cleanups

  Coccinelle:
   - relax boolinit.cocci checks for overall consistency
   - fix warning messages of boolinit.cocci

  Other tools:
   - improve -dirty check of scripts/setlocalversion
   - add a tool to generate compile_commands.json from .*.cmd files"

* tag 'kbuild-v4.21' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild: (51 commits)
  kbuild: remove unused cmd_gentimeconst
  kbuild: remove $(obj)/ prefixes in ./Kbuild
  treewide: add intermediate .s files to targets
  treewide: remove explicit rules for *offsets.s
  firmware: refactor firmware/Makefile
  firmware: remove unnecessary patterns from .gitignore
  scripts: remove unnecessary ihex2fw and check-lc_ctypes from .gitignore
  um: remove unused filechk_gen_header in Makefile
  scripts: add a tool to produce a compile_commands.json file
  kbuild: add -Werror=implicit-int flag unconditionally
  kbuild: add -Werror=strict-prototypes flag unconditionally
  kbuild: add -fno-PIE flag unconditionally
  scripts: coccinelle: Correct warning message
  scripts: coccinelle: only suggest true/false in files that already use them
  kbuild: handle part-of-module correctly for *.ll and *.symtypes
  kbuild: refactor part-of-module
  kbuild: refactor quiet_modtag
  kbuild: remove redundant quiet_modtag for $(obj-m)
  kbuild: refactor Makefile.asm-generic
  user/Makefile: Fix typo and capitalization in comment section
  ...
2018-12-02 | kbuild: move .SECONDARY special target to Kbuild.include | Masahiro Yamada | 1 | -1/+1
In commit 54a702f70589 ("kbuild: mark $(targets) as .SECONDARY and remove .PRECIOUS markers"), I missed one important feature of the .SECONDARY target: .SECONDARY with no prerequisites causes all targets to be treated as secondary. ... which agrees with the policy of Kbuild. Let's move it to scripts/Kbuild.include, with no prerequisites. Note: If an intermediate file is generated by $(call if_changed,...), you still need to add it to "targets" so its .*.cmd file is included. The arm/arm64 crypto files are generated by $(call cmd,shipped), so they do not need to be added to "targets", but need to be added to "clean-files" so "make clean" can properly clean them away. Signed-off-by: Masahiro Yamada <yamada.masahiro@socionext.com>
2018-11-20 | crypto: arm/nhpoly1305 - add NEON-accelerated NHPoly1305 | Eric Biggers | 4 | -0/+200
Add an ARM NEON implementation of NHPoly1305, an ε-almost-∆-universal hash function used in the Adiantum encryption mode. For now, only the NH portion is actually NEON-accelerated; the Poly1305 part is less performance-critical so is just implemented in C. Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-20 | crypto: arm/chacha - add XChaCha12 support | Eric Biggers | 2 | -2/+21
Now that the 32-bit ARM NEON implementation of ChaCha20 and XChaCha20 has been refactored to support varying the number of rounds, add support for XChaCha12. This is identical to XChaCha20 except for the number of rounds, which is 12 instead of 20. XChaCha12 is faster than XChaCha20 but has a lower security margin, though still greater than AES-256's since the best known attacks make it through only 7 rounds. See the patch "crypto: chacha - add XChaCha12 support" for more details about why we need XChaCha12 support. Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-20 | crypto: arm/chacha20 - refactor to allow varying number of rounds | Eric Biggers | 3 | -48/+56
In preparation for adding XChaCha12 support, rename/refactor the NEON implementation of ChaCha20 to support different numbers of rounds. Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-20 | crypto: arm/chacha20 - add XChaCha20 support | Eric Biggers | 3 | -49/+126
Add an XChaCha20 implementation that is hooked up to the ARM NEON implementation of ChaCha20. This is needed for use in the Adiantum encryption mode; see the generic code patch, "crypto: chacha20-generic - add XChaCha20 support", for more details. We also update the NEON code to support HChaCha20 on one block, so we can use that in XChaCha20 rather than calling the generic HChaCha20. This required factoring the permutation out into its own macro. Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
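Conceptually, the XChaCha construction is small; the sketch below uses hypothetical helper names rather than the kernel's actual symbols.

```c
#include <stddef.h>
#include <stdint.h>

/* hypothetical primitives standing in for the kernel/NEON helpers */
void hchacha20(const uint8_t key[32], const uint8_t nonce[16],
               uint8_t subkey[32]);
void chacha20_xor(const uint8_t key[32], const uint8_t nonce[8],
                  uint32_t counter, const uint8_t *in, uint8_t *out,
                  size_t len);

/* XChaCha20: a 192-bit-nonce variant built from plain ChaCha20 */
void xchacha20_xor(const uint8_t key[32], const uint8_t nonce[24],
                   const uint8_t *in, uint8_t *out, size_t len)
{
    uint8_t subkey[32];

    /* derive a one-time subkey from the first 128 bits of the nonce */
    hchacha20(key, nonce, subkey);
    /* then run ordinary ChaCha20 with the remaining 64 nonce bits */
    chacha20_xor(subkey, nonce + 16, 0, in, out, len);
}
```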
2018-11-20 | crypto: arm/chacha20 - limit the preemption-disabled section | Eric Biggers | 1 | -3/+3
To improve responsiveness, disable preemption for each step of the walk (which is at most PAGE_SIZE) rather than for the entire encryption/decryption operation. Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-20 | crypto: chacha20-generic - refactor to allow varying number of rounds | Eric Biggers | 1 | -20/+20
In preparation for adding XChaCha12 support, rename/refactor chacha20-generic to support different numbers of rounds. The justification for needing XChaCha12 support is explained in more detail in the patch "crypto: chacha - add XChaCha12 support". The only difference between ChaCha{8,12,20} are the number of rounds itself; all other parts of the algorithm are the same. Therefore, remove the "20" from all definitions, structures, functions, files, etc. that will be shared by all ChaCha versions. Also make ->setkey() store the round count in the chacha_ctx (previously chacha20_ctx). The generic code then passes the round count through to chacha_block(). There will be a ->setkey() function for each explicitly allowed round count; the encrypt/decrypt functions will be the same. I decided not to do it the opposite way (same ->setkey() function for all round counts, with different encrypt/decrypt functions) because that would have required more boilerplate code in architecture-specific implementations of ChaCha and XChaCha. Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Acked-by: Martin Willi <martin@strongswan.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
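A sketch of the refactor described above, close to (but not copied from) the generic code: one context type carrying the round count, plus a thin setkey wrapper per allowed variant.

```c
#include <asm/unaligned.h>              /* get_unaligned_le32() */
#include <crypto/chacha.h>              /* CHACHA_KEY_SIZE */
#include <crypto/internal/skcipher.h>
#include <linux/kernel.h>               /* ARRAY_SIZE() */

struct chacha_ctx {
    u32 key[8];
    int nrounds;                        /* 20 or 12, set at setkey time */
};

static int chacha_setkey(struct crypto_skcipher *tfm, const u8 *key,
                         unsigned int keysize, int nrounds)
{
    struct chacha_ctx *ctx = crypto_skcipher_ctx(tfm);
    int i;

    if (keysize != CHACHA_KEY_SIZE)
        return -EINVAL;

    for (i = 0; i < ARRAY_SIZE(ctx->key); i++)
        ctx->key[i] = get_unaligned_le32(key + i * sizeof(u32));

    ctx->nrounds = nrounds;
    return 0;
}

static int chacha20_setkey(struct crypto_skcipher *tfm, const u8 *key,
                           unsigned int keysize)
{
    return chacha_setkey(tfm, key, keysize, 20);
}

static int chacha12_setkey(struct crypto_skcipher *tfm, const u8 *key,
                           unsigned int keysize)
{
    return chacha_setkey(tfm, key, keysize, 12);
}
```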
2018-11-16 | crypto: aes-ce - Remove duplicate header | Brajeswar Ghosh | 1 | -1/+0
Remove asm/hwcap.h which is included more than once Signed-off-by: Brajeswar Ghosh <brajeswar.linux@gmail.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-11-09 | crypto: arm/aes - add some hardening against cache-timing attacks | Eric Biggers | 2 | -10/+61
Make the ARM scalar AES implementation closer to constant-time by disabling interrupts and prefetching the tables into L1 cache. This is feasible because due to ARM's "free" rotations, the main tables are only 1024 bytes instead of the usual 4096 used by most AES implementations. On ARM Cortex-A7, the speed loss is only about 5%. The resulting code is still over twice as fast as aes_ti.c. Responsiveness is potentially a concern, but interrupts are only disabled for a single AES block. Note that even after these changes, the implementation still isn't necessarily guaranteed to be constant-time; see https://cr.yp.to/antiforgery/cachetiming-20050414.pdf for a discussion of the many difficulties involved in writing truly constant-time AES software. But it's valuable to make such attacks more difficult. Much of this patch is based on patches suggested by Ard Biesheuvel. Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Eric Biggers <ebiggers@google.com> Reviewed-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
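The approach can be illustrated in C as below, although the real prefetch and encryption happen inside the scalar assembly core; the table and function names here are placeholders.

```c
#include <linux/cache.h>        /* L1_CACHE_BYTES */
#include <linux/compiler.h>     /* READ_ONCE() */
#include <linux/irqflags.h>     /* local_irq_save()/restore() */
#include <linux/types.h>

/* placeholders for the scalar assembly core and its 1 KiB lookup table */
extern const u8 aes_enc_table[1024];
void scalar_aes_encrypt(const u32 *rk, int rounds, const u8 *in, u8 *out);

static void aes_encrypt_hardened(const u32 *rk, int rounds,
                                 const u8 *in, u8 *out)
{
    unsigned long flags;
    unsigned int i;

    /* keep the window short: interrupts are off for one block only */
    local_irq_save(flags);

    /* pull the whole table into L1 so lookups hit with the same latency
     * regardless of the (secret-dependent) index */
    for (i = 0; i < sizeof(aes_enc_table); i += L1_CACHE_BYTES)
        (void)READ_ONCE(aes_enc_table[i]);

    scalar_aes_encrypt(rk, rounds, in, out);
    local_irq_restore(flags);
}
```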
2018-09-21 | crypto: arm/crc32 - avoid warning when compiling with Clang | Stefan Agner | 1 | -1/+1
The table id (second) argument to MODULE_DEVICE_TABLE is often referenced otherwise. This is not the case for CPU features. This leads to a warning when building the kernel with Clang: arch/arm/crypto/crc32-ce-glue.c:239:33: warning: variable 'crc32_cpu_feature' is not needed and will not be emitted [-Wunneeded-internal-declaration] static const struct cpu_feature crc32_cpu_feature[] = { ^ Avoid warnings by using __maybe_unused, similar to commit 1f318a8bafcf ("modules: mark __inittest/__exittest as __maybe_unused"). Fixes: 2a9faf8b7e43 ("crypto: arm/crc32 - enable module autoloading based on CPU feature bits") Signed-off-by: Stefan Agner <stefan@agner.ch> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
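A sketch of the fix as it lands in the glue code; the table entries are illustrative.

```c
#include <linux/cpufeature.h>   /* struct cpu_feature, cpu_feature() */
#include <linux/module.h>       /* MODULE_DEVICE_TABLE() */

/*
 * The table is only consumed by MODULE_DEVICE_TABLE() for module alias
 * generation, so Clang sees no other reference to it; __maybe_unused
 * silences the -Wunneeded-internal-declaration warning.
 */
static const struct cpu_feature __maybe_unused crc32_cpu_feature[] = {
    { cpu_feature(CRC32) }, { cpu_feature(PMULL) }, { }
};
MODULE_DEVICE_TABLE(cpu, crc32_cpu_feature);
```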
2018-09-04 | crypto: arm/chacha20 - faster 8-bit rotations and other optimizations | Eric Biggers | 1 | -134/+143
Optimize ChaCha20 NEON performance by: - Implementing the 8-bit rotations using the 'vtbl.8' instruction. - Streamlining the part that adds the original state and XORs the data. - Making some other small tweaks. On ARM Cortex-A7, these optimizations improve ChaCha20 performance from about 12.08 cycles per byte to about 11.37 -- a 5.9% improvement. There is a tradeoff involved with the 'vtbl.8' rotation method since there is at least one CPU (Cortex-A53) where it's not fastest. But it seems to be a better default; see the added comment. Overall, this patch reduces Cortex-A53 performance by less than 0.5%. Signed-off-by: Eric Biggers <ebiggers@google.com> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-09-04 | crypto: arm/ghash-ce - implement support for 4-way aggregation | Ard Biesheuvel | 3 | -16/+131
Speed up the GHASH algorithm based on 64-bit polynomial multiplication by adding support for 4-way aggregation. This improves throughput by ~85% on Cortex-A53, from 1.7 cycles per byte to 0.9 cycles per byte. When combined with AES into GCM, throughput improves by ~25%, from 3.8 cycles per byte to 3.0 cycles per byte. Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
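For one 4-block chunk with a zero initial state, the aggregation rewrites the serial GHASH recurrence Y_i = (Y_{i-1} ⊕ X_i)·H into a form where the four multiplications (by precomputed powers of H) are independent and only one reduction is needed (⊕ is XOR, · is carry-less multiplication reduced mod the GHASH polynomial):

  serial:      Y_4 = (((X_1·H ⊕ X_2)·H ⊕ X_3)·H ⊕ X_4)·H
  aggregated:  Y_4 = X_1·H^4 ⊕ X_2·H^3 ⊕ X_3·H^2 ⊕ X_4·H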
2018-09-04 | crypto: speck - remove Speck | Jason A. Donenfeld | 4 | -730/+0
These are unused, undesired, and have never actually been used by anybody. The original authors of this code have changed their mind about its inclusion. While originally proposed for disk encryption on low-end devices, the idea was discarded [1] in favor of something else before that could really get going. Therefore, this patch removes Speck. [1] https://marc.info/?l=linux-crypto-vger&m=153359499015659 Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Acked-by: Eric Biggers <ebiggers@google.com> Cc: stable@vger.kernel.org Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
2018-08-23 | treewide: convert ISO_8859-1 text comments to utf-8 | Arnd Bergmann | 2 | -3/+3
Almost all files in the kernel are either plain text or UTF-8 encoded. A couple however are ISO_8859-1, usually just a few characters in a C comments, for historic reasons. This converts them all to UTF-8 for consistency. Link: http://lkml.kernel.org/r/20180724111600.4158975-1-arnd@arndb.de Signed-off-by: Arnd Bergmann <arnd@arndb.de> Acked-by: Simon Horman <horms@verge.net.au> [IPVS portion] Acked-by: Jonathan Cameron <Jonathan.Cameron@huawei.com> [IIO] Acked-by: Michael Ellerman <mpe@ellerman.id.au> [powerpc] Acked-by: Rob Herring <robh@kernel.org> Cc: Joe Perches <joe@perches.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Samuel Ortiz <sameo@linux.intel.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Rob Herring <robh+dt@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2018-08-03 | crypto: arm/chacha20 - always use vrev for 16-bit rotates | Eric Biggers | 1 | -6/+4
The 4-way ChaCha20 NEON code implements 16-bit rotates with vrev32.16, but the one-way code (used on remainder blocks) implements it with vshl + vsri, which is slower. Switch the one-way code to vrev32.16 too. Signed-off-by: Eric Biggers <ebiggers@google.com> Acked-by: Ard Biesheuvel <ard.biesheuvel@linaro.org> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>