path: root/sys/netinet/ip_ipcomp.c
author    guenther <guenther@openbsd.org> 2018-07-12 14:11:11 +0000
committer guenther <guenther@openbsd.org> 2018-07-12 14:11:11 +0000
commit    1fc8fad1ef00427ff55d700df3f3dfdb82455f63 (patch)
tree      0e357d177b40c5a738fc261b53ca00dc615b2651 /sys/netinet/ip_ipcomp.c
parent    fix Test 7.1 after main.c rev. 1.37; (diff)
Reorganize the Meltdown entry and exit trampolines for syscall and
traps so that the "mov %rax,%cr3" is followed by an infinite loop
which is avoided because the mapping of the code being executed is
changed. This means the sysretq/iretq isn't even present in that flow
of instructions in the kernel mapping, so userspace code can't be
speculatively reached on the kernel mapping. It also totally
eliminates the conditional jump over the %cr3 change that supported
CPUs without the Meltdown vulnerability. The return paths were
probably vulnerable to Spectre v1 (and v1.1/1.2) style attacks,
speculatively executing user code post-system-call with the kernel
mappings, thus creating cache/TLB/etc side-effects.

Would like to apply this technique to the interrupt stubs too, but
I'm hitting a bug in clang's assembler which misaligns the code and
symbols.

While here, when on a CPU not vulnerable to Meltdown, codepatch out
the unnecessary bits in cpu_switchto().

Inspiration from sf@, refined over dinner with theo
ok mlarkin@ deraadt@
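The dual-mapping trick described in the message can be sketched
roughly as follows. This is a hypothetical illustration only: the
label names, instruction layout, and offsets are invented for
exposition and are not the actual amd64 locore.S code.

    /* Exit trampoline page, mapped at the same virtual address in
     * both the kernel and the user page tables, but backed by
     * different physical pages with different bytes after the
     * %cr3 write (sketch; not the real OpenBSD symbols). */
    tramp_exit:
            movq    %rax,%cr3       /* switch to the user page tables */
    1:      jmp     1b              /* infinite loop in the KERNEL copy;
                                     * never executed, because the %cr3
                                     * write changed which page backs
                                     * this virtual address */

    /* In the USER mapping's copy of the page, the bytes at the same
     * offset are instead the actual return to userspace: */
            sysretq                 /* only ever fetched via the user
                                     * mapping, so it cannot be reached
                                     * speculatively on the kernel one */

Because the sysretq exists only in the user mapping's copy, there is
nothing after the %cr3 write for the kernel-mapped instruction stream
to speculate into, which is the property the commit message claims.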
Diffstat (limited to 'sys/netinet/ip_ipcomp.c')
0 files changed, 0 insertions, 0 deletions