path: root/src/crypto/zinc/chacha20/chacha20-arm.pl
author    Samuel Neves <sneves@dei.uc.pt>    2019-05-04 17:14:09 +0100
committer Jason A. Donenfeld <Jason@zx2c4.com>    2019-05-29 01:23:24 +0200
commit    22bbac4d2ffb62f28b0483f05f24a0f41639b787
tree      262a0864dc669ac71dd27264f119c145799c4bc0 /src/crypto/zinc/chacha20/chacha20-arm.pl
parent    qemu: do not check for alignment with ubsan
blake2s,chacha: latency tweak
In every odd-numbered round, instead of operating over the state

    x00 x01 x02 x03
    x05 x06 x07 x04
    x10 x11 x08 x09
    x15 x12 x13 x14

we operate over the rotated state

    x03 x00 x01 x02
    x04 x05 x06 x07
    x09 x10 x11 x08
    x14 x15 x12 x13

The advantage here is that this requires no changes to the 'x04 x05 x06 x07' row, which is in the critical path. This results in a noticeable latency improvement of roughly R cycles, for R diagonal rounds in the primitive.

In the case of BLAKE2s, which I also moved from requiring AVX to only requiring SSSE3, we save approximately 30 cycles per compression function call on Haswell and Skylake. In other words, this is an improvement of ~0.6 cpb.

This idea was pointed out to me by Shunsuke Shimizu, though it appears to have been around for longer.

Signed-off-by: Samuel Neves <sneves@dei.uc.pt>
Diffstat
 1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/crypto/zinc/chacha20/chacha20-arm.pl b/src/crypto/zinc/chacha20/chacha20-arm.pl
index 6a7d62c..6785383 100644
--- a/src/crypto/zinc/chacha20/chacha20-arm.pl
+++ b/src/crypto/zinc/chacha20/chacha20-arm.pl
@@ -686,9 +686,9 @@ my ($a,$b,$c,$d,$t)=@_;
"&vshr_u32 ($b,$t,25)",
"&vsli_32 ($b,$t,7)",
- "&vext_8 ($c,$c,$c,8)",
- "&vext_8 ($b,$b,$b,$odd?12:4)",
- "&vext_8 ($d,$d,$d,$odd?4:12)"
+ "&vext_8 ($a,$a,$a,$odd?4:12)",
+ "&vext_8 ($d,$d,$d,8)",
+ "&vext_8 ($c,$c,$c,$odd?12:4)"