author		2017-12-11 05:48:01 +0000
committer	2017-12-11 05:48:01 +0000
commit		3f624d5f689b2964adb044dbb5971a0310c8869d (patch)
tree		54f5f3ab8d249de3ffc2608e6fc0375d50c09ee5 /lib/libcrypto/aes
parent		In uvm Chuck decided backing store would not be allocated proactively (diff)
download	wireguard-openbsd-3f624d5f689b2964adb044dbb5971a0310c8869d.tar.xz
		wireguard-openbsd-3f624d5f689b2964adb044dbb5971a0310c8869d.zip
http://repzret.org/p/repzret/
My read of this: Long time ago (think Conan, not dinosaurs) during the race
to make speedier processors, a cpu vendor built a pipeline with a bad stall,
and proposed a tremendously hacky workaround. A wizard adopted this into his
perl scroll, and failed to reflect later when no compiler adopted the practice.
This relic remains at the tail end of some functions in OpenSSL as
".byte 0xf3,0xc3". Banish it straight to hell.
ok mlarkin, others also stared blankly
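For context, a minimal sketch of what the bytes in question mean. This is my annotation, not part of the commit: 0xF3 is the x86 REP prefix and 0xC3 is the near RET opcode, so the literal `.byte 0xf3,0xc3` is the two-byte "rep ret" sequence (a RET carrying a redundant prefix, once recommended as a branch-predictor workaround on early AMD64 parts), while `retq` assembles to the single byte 0xC3.

```python
# Sketch (assumption: annotating the commit, not code from it).
# 0xF3 = x86 REP prefix, 0xC3 = near RET opcode. "rep ret" is just RET
# with a redundant prefix; "retq" is the bare one-byte RET.
rep_ret = bytes([0xF3, 0xC3])   # what the Perl script used to emit
retq = bytes([0xC3])            # what plain "retq" assembles to

REP_PREFIX = 0xF3
RET_OPCODE = 0xC3

assert rep_ret[0] == REP_PREFIX   # leading byte is only a prefix
assert rep_ret[1] == RET_OPCODE   # the actual instruction is RET
assert retq[0] == RET_OPCODE      # retq drops the prefix entirely
print("rep ret =", rep_ret.hex(), "| retq =", retq.hex())
```

Either sequence returns from the function; the diff below simply stops emitting the obsolete prefixed form.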
Diffstat (limited to 'lib/libcrypto/aes')
-rwxr-xr-x	lib/libcrypto/aes/asm/aes-x86_64.pl	10
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/lib/libcrypto/aes/asm/aes-x86_64.pl b/lib/libcrypto/aes/asm/aes-x86_64.pl
index c37fd55648b..9072f603a94 100755
--- a/lib/libcrypto/aes/asm/aes-x86_64.pl
+++ b/lib/libcrypto/aes/asm/aes-x86_64.pl
@@ -352,7 +352,7 @@ ___
 ___
 }
 $code.=<<___;
-	.byte	0xf3,0xc3			# rep ret
+	retq
 .size	_x86_64_AES_encrypt,.-_x86_64_AES_encrypt
 ___
@@ -580,7 +580,7 @@ $code.=<<___;
 	xor	4($key),$s1
 	xor	8($key),$s2
 	xor	12($key),$s3
-	.byte	0xf3,0xc3			# rep ret
+	retq
 .size	_x86_64_AES_encrypt_compact,.-_x86_64_AES_encrypt_compact
 ___
@@ -925,7 +925,7 @@ ___
 ___
 }
 $code.=<<___;
-	.byte	0xf3,0xc3			# rep ret
+	retq
 .size	_x86_64_AES_decrypt,.-_x86_64_AES_decrypt
 ___
@@ -1179,7 +1179,7 @@ $code.=<<___;
 	xor	4($key),$s1
 	xor	8($key),$s2
 	xor	12($key),$s3
-	.byte	0xf3,0xc3			# rep ret
+	retq
 .size	_x86_64_AES_decrypt_compact,.-_x86_64_AES_decrypt_compact
 ___
@@ -1496,7 +1496,7 @@ $code.=<<___;
 .Lbadpointer:	mov	\$-1,%rax
 .Lexit:
-	.byte	0xf3,0xc3			# rep ret
+	retq
 .size	_x86_64_AES_set_encrypt_key,.-_x86_64_AES_set_encrypt_key
 ___