path: root/src/crypto
Commit message | Author | Date | Files | Lines
* crypto: import zinc | Jason A. Donenfeld | 2018-09-03 | 42 | -984/+14670
* curve25519-arm: prefix immediates with # | Jason A. Donenfeld | 2018-08-28 | 1 | -18/+18
* curve25519-arm: do not waste 32 bytes of stack | Jason A. Donenfeld | 2018-08-28 | 1 | -88/+88
* curve25519-arm: use ordinary prologue and epilogue | Samuel Neves | 2018-08-28 | 1 | -18/+6
    Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
* curve25519-arm: add spaces after commas | Jason A. Donenfeld | 2018-08-28 | 1 | -2074/+2074
* curve25519-arm: cleanups from lkml | Jason A. Donenfeld | 2018-08-28 | 1 | -33/+30
    Suggested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
* curve25519-arm: reformat | Jason A. Donenfeld | 2018-08-28 | 1 | -2096/+2096
* curve25519-x86_64: let the compiler decide when/how to load constants | Samuel Neves | 2018-08-28 | 1 | -5/+2
    Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
* curve25519-hacl64: use formally verified C for comparisons | Jason A. Donenfeld | 2018-08-28 | 1 | -6/+19
    The previous code had been proved correct in Z3, but this new code from
    upstream KreMLin is generated directly from the F* source, which is
    preferable. The generated assembly is identical.
* crypto: use unaligned helpers | Jason A. Donenfeld | 2018-08-28 | 7 | -48/+51
    This is not useful for WireGuard, but for the general use case we
    probably want it this way, and the speed difference is mostly lost in
    the noise.
* curve25519-hacl64: correct u64_gte_mask | Samuel Neves | 2018-08-07 | 1 | -3/+1
    Remove signed right shifts. Previously u64_gte_mask was only correct
    for x < 2^63.

    Z3 script proving correctness:

    >>> from z3 import *
    >>>
    >>> x = BitVec("x", 64)
    >>> y = BitVec("y", 64)
    >>>
    >>> t = LShR(x^((x^y)|((x-y)^y)), 63) - 1
    >>>
    >>> prove(If(UGE(x, y), BitVecVal(-1, 64), BitVecVal(0, 64)) == t)
    proved

    Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
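    In portable C, the resulting mask looks like the following (a minimal
    sketch of the technique, not the verbatim kernel code):

        #include <stdint.h>

        /* All-ones when x >= y, all-zeros otherwise, using only logical
         * (unsigned) right shifts, so it is correct for the full 64-bit
         * range of x and y. */
        static uint64_t u64_gte_mask(uint64_t x, uint64_t y)
        {
            return ((x ^ ((x ^ y) | ((x - y) ^ y))) >> 63) - 1;
        }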
* curve25519-hacl64: simplify u64_eq_mask | Samuel Neves | 2018-08-07 | 1 | -8/+3
    Avoid signed right shift.

    Z3 script showing equivalence:

    >>> from z3 import *
    >>>
    >>> x = BitVec("x", 64)
    >>> y = BitVec("y", 64)
    >>>
    >>> # Before
    ... x_ = ~(x ^ y)
    >>> x_ &= x_ << 32
    >>> x_ &= x_ << 16
    >>> x_ &= x_ << 8
    >>> x_ &= x_ << 4
    >>> x_ &= x_ << 2
    >>> x_ &= x_ << 1
    >>> x_ >>= 63
    >>>
    >>> # After
    ... y_ = x ^ y
    >>> y_ = y_ | -y_
    >>> y_ = LShR(y_, 63) - 1
    >>>
    >>> prove(x_ == y_)
    proved

    Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
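    The "After" sequence translates to C as follows (a sketch under the
    same caveat; `>>` on a uint64_t is a logical shift, so no signed shift
    appears anywhere):

        #include <stdint.h>

        /* All-ones when x == y, all-zeros otherwise. */
        static uint64_t u64_eq_mask(uint64_t x, uint64_t y)
        {
            uint64_t t = x ^ y;    /* zero iff x == y            */
            t |= (uint64_t)0 - t;  /* top bit set iff t != 0     */
            return (t >> 63) - 1;  /* 0 - 1 = all-ones iff equal */
        }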
* chacha20: use memmove in case buffers overlap | Jason A. Donenfeld | 2018-08-07 | 1 | -1/+1
    Suggested-by: Samuel Neves <sneves@dei.uc.pt>
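    The point is in-place operation: WireGuard encrypts and decrypts
    packets in their own buffers, so input and output may alias, and memcpy
    on overlapping regions is undefined behavior. Schematically (the
    function name here is illustrative, not the actual signature):

        uint8_t packet[1500];
        /* dst == src is legal for the cipher, so any internal copy of
         * partially processed data must use memmove, not memcpy. */
        chacha20_crypt(packet, packet, sizeof(packet), key, nonce);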
* curve25519-x86_64: avoid use of r12 | Jason A. Donenfeld | 2018-08-07 | 1 | -107/+107
    This causes problems with RAP and KERNEXEC for PaX, as r12 is a
    reserved register.

    Suggested-by: PaX Team <pageexec@freemail.hu>
* crypto: move simd context to specific type | Jason A. Donenfeld | 2018-08-06 | 7 | -98/+106
    Suggested-by: Andy Lutomirski <luto@kernel.org>
* main: add missing chacha20poly1305 header | Jason A. Donenfeld | 2018-07-31 | 1 | -1/+0
* curve25519-x86_64: tighten reductions modulo 2^256-38 | Samuel Neves | 2018-07-28 | 1 | -21/+18
    At this stage the value of C[4] is at most
    ((2^256-1) + 38*(2^256-1)) / 2^256 = 38, so there is no need to use a
    wide multiplication.

    Change inspired by Andy Polyakov's OpenSSL implementation.

    Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
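    In outline, using 2^256 mod (2^256 - 38) = 38 (a C sketch of the
    argument; the limb layout is illustrative and the real change is in
    assembly):

        #include <stdint.h>

        typedef unsigned __int128 u128;

        /* Fold the fifth limb c4 of c4*2^256 + c[3..0] back into the low
         * 256 bits. Since c4 <= 38 here, c4 * 38 <= 1444, so a plain
         * 64-bit multiply suffices and no widening multiply is needed. */
        static void fold_top_limb(uint64_t c[4], uint64_t c4)
        {
            u128 acc = (u128)c[0] + c4 * 38;
            c[0] = (uint64_t)acc;
            acc = (acc >> 64) + c[1]; c[1] = (uint64_t)acc;
            acc = (acc >> 64) + c[2]; c[2] = (uint64_t)acc;
            acc = (acc >> 64) + c[3]; c[3] = (uint64_t)acc;
            /* the carry out here is at most 1 and is absorbed by the
             * caller's subsequent reduction step */
        }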
* curve25519-x86_64: simplify the final reduction by adding 19 beforehand | Samuel Neves | 2018-07-28 | 1 | -40/+26
    Correctness can be quickly verified with the following z3py script:

    >>> from z3 import *
    >>> x = BitVec("x", 256)                         # any 256-bit value
    >>> ref = URem(x, 2**255 - 19)                   # correct value
    >>> t = Extract(255, 255, x); x &= 2**255 - 1;   # btrq $63, %3
    >>> u = If(t != 0, BitVecVal(38, 256), BitVecVal(19, 256))  # cmovncl %k5, %k4
    >>> x += u    # addq %4, %0; adcq $0, %1; adcq $0, %2; adcq $0, %3;
    >>> t = Extract(255, 255, x); x &= 2**255 - 1;   # btrq $63, %3
    >>> u = If(t != 0, BitVecVal(0, 256), BitVecVal(19, 256))   # cmovncl %k5, %k4
    >>> x -= u    # subq %4, %0; sbbq $0, %1; sbbq $0, %2; sbbq $0, %3;
    >>> prove(x == ref)
    proved

    Change inspired by Andy Polyakov's OpenSSL implementation.

    Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
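    The same sequence in portable C, mirroring the script line for line (a
    sketch; the kernel's version is inline assembly):

        #include <stdint.h>

        /* Reduce a 256-bit value x (four little-endian limbs) modulo
         * 2^255 - 19: add 19 (or 38 when bit 255 was set) up front, then
         * conditionally subtract 19, exactly as proved above. */
        static void reduce_final(uint64_t x[4])
        {
            uint64_t t, u, carry, borrow;
            int i;

            t = x[3] >> 63;                 /* btrq $63: test...        */
            x[3] &= 0x7fffffffffffffffULL;  /* ...and clear bit 255     */
            u = 19 + 19 * t;                /* 38 if it was set, else 19 */
            carry = (x[0] += u) < u;
            for (i = 1; i < 4; i++)
                carry = (x[i] += carry) < carry;

            t = x[3] >> 63;
            x[3] &= 0x7fffffffffffffffULL;
            u = 19 * (1 - t);               /* 0 if set, else 19 */
            borrow = x[0] < u;
            x[0] -= u;
            for (i = 1; i < 4; i++) {
                uint64_t b = x[i] < borrow;
                x[i] -= borrow;
                borrow = b;
            }
        }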
* curve25519-x86_64: tighten the x25519 assembly | Samuel Neves | 2018-07-28 | 1 | -3/+3
    The wide multiplication by 38 in mul_a24_eltfp25519_1w is redundant:
    (2^256-1) * 121666 / 2^256 is at most 121665, and therefore a 64-bit
    multiplication can never overflow.

    Change inspired by Andy Polyakov's OpenSSL implementation.

    Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
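    The bound in code form (a sketch of the argument, not the kernel's
    assembly; a24 = 121666 is the X25519 ladder constant):

        #include <stdint.h>

        typedef unsigned __int128 u128;

        /* Multiply a 256-bit field element by a24 and fold the overflow
         * using 2^256 mod (2^255 - 19) = 38. The carry out of the first
         * loop is at most floor((2^256-1) * 121666 / 2^256) = 121665, so
         * carry * 38 fits easily in 64 bits: the wide multiply by 38 was
         * redundant. */
        static void mul_a24(uint64_t out[4], const uint64_t a[4])
        {
            const uint64_t a24 = 121666;
            u128 acc = 0;
            uint64_t carry;
            int i;

            for (i = 0; i < 4; i++) {
                acc += (u128)a[i] * a24;
                out[i] = (uint64_t)acc;
                acc >>= 64;
            }
            carry = (uint64_t)acc * 38;       /* <= 121665 * 38 */
            for (i = 0; i < 4 && carry; i++) {
                acc = (u128)out[i] + carry;
                out[i] = (uint64_t)acc;
                carry = (uint64_t)(acc >> 64);
            }
        }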
* simd: add missing header | Jason A. Donenfeld | 2018-06-22 | 1 | -0/+1
    Suggested-by: Shlomi Steinberg <shlomi@shlomisteinberg.com>
* poly1305: give linker the correct constant data section size | Jason A. Donenfeld | 2018-06-22 | 1 | -1/+1
    Otherwise these constants will be merged incorrectly or excluded, and
    we'll wind up with wrong calculations. While bfd (the normal kernel
    linker) doesn't seem to mind, recent versions of gold do bad things.
* poly1305: add missing string.h header | Jason A. Donenfeld | 2018-06-20 | 1 | -0/+1
    Reported-by: Peter Korsgaard <peter@korsgaard.com>
* simd: no need to restore fpu state when no preemption | Jason A. Donenfeld | 2018-06-17 | 1 | -0/+2
* simd: encapsulate fpu amortization into nice functions | Jason A. Donenfeld | 2018-06-17 | 3 | -47/+66
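    The shape of the resulting helpers, sketched from the description (x86
    variant, kernel-only APIs; at this point the context is a plain bool,
    which the 2018-08-06 "move simd context to specific type" commit above
    later replaces with a dedicated type):

        #include <linux/types.h>
        #include <linux/sched.h>   /* need_resched                       */
        #include <asm/fpu/api.h>   /* kernel_fpu_begin/end, irq_fpu_usable */

        /* Take the FPU once for a whole batch of work... */
        static inline bool simd_get(void)
        {
            bool have_simd = irq_fpu_usable();
            if (have_simd)
                kernel_fpu_begin();
            return have_simd;
        }

        /* ...release it at the end... */
        static inline void simd_put(bool was_on)
        {
            if (was_on)
                kernel_fpu_end();
        }

        /* ...and between blocks, briefly drop it when a reschedule is
         * pending, so long inputs don't keep preemption disabled. */
        static inline bool simd_relax(bool was_on)
        {
            if (was_on && need_resched()) {
                kernel_fpu_end();
                kernel_fpu_begin();
            }
            return was_on;
        }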
* chacha20poly1305: use slow crypto on -rt kernels on arm too | Jason A. Donenfeld | 2018-06-14 | 1 | -1/+1
* chacha20poly1305: use slow crypto on -rt kernels | Jason A. Donenfeld | 2018-06-13 | 1 | -1/+1
    In -rt kernels, spinlocks call schedule(), which means preemption can't
    stay disabled, yet using the FPU requires preemption to be disabled.
    Hence, we can either restructure things so that the calls to
    kernel_fpu_begin/end sit really close to the actual crypto routines, or
    we can take the slower, lazier route of simply not using the FPU at all
    on -rt kernels. This patch goes with the latter.

    The reason we don't place the calls to kernel_fpu_begin/end close to
    the crypto routines in the first place is that they're very expensive,
    usually involving a call to XSAVE. So on sane kernels, we benefit from
    only having to call them once.
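    Schematically, the gate is a one-line condition (CONFIG_PREEMPT_RT_BASE
    is the -rt patch set's option; the exact condition shown here is
    illustrative):

        /* Only claim the FPU on non-rt kernels, where disabling
         * preemption around the vectorized routines is acceptable;
         * otherwise fall back to the scalar implementations. */
        #if defined(CONFIG_X86_64) && !defined(CONFIG_PREEMPT_RT_BASE)
            have_simd = simd_get();
        #else
            have_simd = false;
        #endif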
* chacha20: add missing include to header | Jason A. Donenfeld | 2018-06-02 | 1 | -0/+1
* poly1305: mips: compute S on fly | René van Dorst | 2018-05-31 | 1 | -31/+22
    This reduces memory access and the total opaque size.

    Signed-off-by: René van Dorst <opensource@vdorst.com>
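    For reference, the trick in the standard radix-2^26 formulation, in
    generic C (the actual change is MIPS32 assembly; this only shows where
    the s-values come from):

        #include <stdint.h>

        /* First column of one h *= r multiply of poly1305 with 26-bit
         * limbs. The s-values 5*r_i (from 2^130 mod (2^130 - 5) = 5) are
         * computed on the fly instead of being cached next to r in the
         * opaque context, shrinking the state and the number of loads. */
        static uint64_t poly1305_d0(const uint32_t h[5], const uint32_t r[5])
        {
            return (uint64_t)h[0] * r[0] +
                   (uint64_t)h[1] * (5 * r[4]) +
                   (uint64_t)h[2] * (5 * r[3]) +
                   (uint64_t)h[3] * (5 * r[2]) +
                   (uint64_t)h[4] * (5 * r[1]);
            /* d1..d4 follow the same pattern before carry propagation */
        }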
* crypto: consistent constification | Jason A. Donenfeld | 2018-05-31 | 6 | -23/+23
* chacha20poly1305: combine stack variables into union | Jason A. Donenfeld | 2018-05-31 | 1 | -54/+53
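    The pattern, sketched with illustrative member names: temporaries that
    are live at different phases of sealing/opening share one stack slot.

        /* block0 (the derived poly1305 key block) and the final MAC are
         * never needed at the same time, so a union reuses one zeroed
         * stack allocation for both. */
        union {
            uint8_t block0[POLY1305_KEY_SIZE];
            uint8_t mac[POLY1305_MAC_SIZE];
        } b = { { 0 } };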
* chacha20poly1305: split up into separate files | Jason A. Donenfeld | 2018-05-31 | 6 | -614/+724
* curve25519: x86_64: make symbol static | Jason A. Donenfeld | 2018-05-29 | 1 | -2/+2
* curve25519: x86_64: satisfy sparse | Jason A. Donenfeld | 2018-05-29 | 1 | -260/+260
* chacha20poly1305: add mips32 implementation | René van Dorst | 2018-05-18 | 3 | -5/+912
    Signed-off-by: René van Dorst <opensource@vdorst.com>
* chacha20poly1305: make gcc 8.1 happy | Samuel Neves | 2018-05-13 | 1 | -2/+2
    GCC 8.1 does not know about the invariant
    `0 <= ctx->num < POLY1305_BLOCK_SIZE`. This results in a warning that
    `memcpy(ctx->data + num, inp, len);` may overflow the `data` field,
    which is correct for arbitrary values of `num`.

    To make the invariant explicit, we ensure that `num` is in the
    required range. An alternative would be to change `ctx->num` to a
    4-bit bitfield at the point of declaration. This changes the generated
    code from `test ebp, ebp; jz end` to `and ebp, 15; jz end`; the two
    sequences have identical performance characteristics.

    Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
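    The fix, sketched in context (struct layout illustrative; the reduction
    is cheap because POLY1305_BLOCK_SIZE is 16, a power of two):

        #include <stdint.h>
        #include <string.h>

        #define POLY1305_BLOCK_SIZE 16

        struct poly1305_ctx {
            uint8_t data[POLY1305_BLOCK_SIZE];
            size_t num;
        };

        /* Buffer leading bytes until a block is full. Reducing num modulo
         * the block size re-establishes 0 <= num < 16 for the compiler,
         * letting it prove the memcpy stays inside ctx->data. */
        static size_t buffer_partial(struct poly1305_ctx *ctx,
                                     const uint8_t *inp, size_t len)
        {
            size_t num = ctx->num % POLY1305_BLOCK_SIZE;
            if (num) {
                size_t rem = POLY1305_BLOCK_SIZE - num;
                if (rem > len)
                    rem = len;
                memcpy(ctx->data + num, inp, rem);
                ctx->num = num + rem;
                return rem;
            }
            return 0;
        }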
* poly1305: do not place constants in different sections | Jason A. Donenfeld | 2018-04-18 | 1 | -14/+1
    We're referencing these constants as one contiguous blob, so if any
    constant merging happens with code elsewhere (such as the kernel's
    current poly1305 implementation that we hope to replace), they will be
    reordered and wind up with the wrong values.
* blake2s: remove unused helper | Jason A. Donenfeld | 2018-04-16 | 1 | -5/+0
* chacha20poly1305: put magic constant behind macro | Jason A. Donenfeld | 2018-04-05 | 1 | -2/+4
* curve25519: precomp const correctness | Jason A. Donenfeld | 2018-03-09 | 1 | -24/+22
* curve25519: memzero in batches | Jason A. Donenfeld | 2018-03-09 | 1 | -140/+124
* curve25519: use cmov instead of xor for cswap | Jason A. Donenfeld | 2018-03-09 | 1 | -12/+39
    Also add a cselect optimization.
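    The two styles, contrasted in C (illustrative; the commit's change is
    in the x86_64 assembly, where the second form maps onto cmov):

        #include <stdint.h>

        /* xor-based constant-time swap: three xors per limb, all through
         * memory. bit must be exactly 0 or 1. */
        static void cswap_xor(uint64_t a[4], uint64_t b[4], uint64_t bit)
        {
            uint64_t mask = (uint64_t)0 - bit;
            for (int i = 0; i < 4; i++) {
                uint64_t t = mask & (a[i] ^ b[i]);
                a[i] ^= t;
                b[i] ^= t;
            }
        }

        /* select-based form: each output limb is simply one of the two
         * inputs, which assembly can express as mov + cmov. */
        static void cselect(uint64_t out[4], const uint64_t a[4],
                            const uint64_t b[4], uint64_t bit)
        {
            uint64_t mask = (uint64_t)0 - bit;
            for (int i = 0; i < 4; i++)
                out[i] = (a[i] & ~mask) | (b[i] & mask);
        }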
* curve25519: use precomp implementation instead of sandy2x | Jason A. Donenfeld | 2018-03-09 | 3 | -3437/+2070
    It's faster and doesn't use the FPU.
* crypto: read only after init | Jason A. Donenfeld | 2018-03-02 | 4 | -10/+11
* blake2s: use union instead of casting | Jason A. Donenfeld | 2018-02-14 | 1 | -18/+16
    This deals with alignment more easily and also helps squelch a
    clang-analyzer warning.
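    The shape of the fix (field names follow the BLAKE2s spec; sizes are
    the standard ones): overlaying the byte buffer with a word-sized view
    lets the compression step read 32-bit words without any pointer cast,
    with alignment guaranteed by the union itself.

        #include <stdint.h>

        #define BLAKE2S_BLOCK_SIZE 64

        struct blake2s_state {
            uint32_t h[8];
            uint32_t t[2];
            uint32_t f[2];
            union {
                uint8_t  buf[BLAKE2S_BLOCK_SIZE];
                uint32_t words[BLAKE2S_BLOCK_SIZE / sizeof(uint32_t)];
            };          /* one buffer, two correctly-aligned views */
            unsigned int buflen;
        };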
* curve25519: replace fiat64 with faster hacl64 | Jason A. Donenfeld | 2018-02-01 | 3 | -470/+883
    This reverts commit da4ff396cc5d5e0ff21f9ecbc2f951c048c63fff and adds
    some optimizations to hacl64.
* curve25519: replace hacl64 with fiat64 | Jason A. Donenfeld | 2018-02-01 | 3 | -871/+470
    For now, it's faster:

    hacl64: 109782 cycles per call
    fiat64: 108984 cycles per call

    It's quite possible this commit will be reverted with nice changes
    from INRIA, though.
* chacha20poly1305: better buffer alignment | Jason A. Donenfeld | 2018-01-30 | 1 | -9/+8
* chacha20poly1305: use existing rol32 function | Jason A. Donenfeld | 2018-01-30 | 1 | -9/+4
* poly1305: add poly-specific self-tests | Jason A. Donenfeld | 2018-01-19 | 2 | -0/+2
* curve25519-fiat32: uninline certain functions | Jason A. Donenfeld | 2018-01-18 | 1 | -4/+4
    While this has a negative performance impact on x86_64, it has a
    positive performance impact on the smaller machines where we actually
    use this code. For example, on an A53:

    Before: fiat32: 228605 cycles per call
    After:  fiat32: 188307 cycles per call