path: root/arch/x86/entry/entry_64.S
author    Peter Zijlstra <peterz@infradead.org>	2022-06-14 23:16:03 +0200
committer Borislav Petkov <bp@suse.de>	2022-06-27 10:34:00 +0200
commit    a09a6e2399ba0595c3042b3164f3ca68a3cff33e (patch)
tree      bf16062820b967a0b9c59169adf156eec8ed55a4 /arch/x86/entry/entry_64.S
parent    x86/bugs: Do IBPB fallback check only once (diff)
download  wireguard-linux-a09a6e2399ba0595c3042b3164f3ca68a3cff33e.tar.xz
          wireguard-linux-a09a6e2399ba0595c3042b3164f3ca68a3cff33e.zip
objtool: Add entry UNRET validation
Since entry asm is tricky, add a validation pass that ensures the
retbleed mitigation has been done before the first actual RET
instruction.

Entry points are those that either have UNWIND_HINT_ENTRY, which acts
as UNWIND_HINT_EMPTY but marks the instruction as an entry point, or
those that have UNWIND_HINT_IRET_REGS at +0.

This is basically a variant of validate_branch() that is intra-function:
it will simply follow all branches from marked entry points and ensure
that all paths lead to ANNOTATE_UNRET_END. If a path hits RET or an
indirection, the path is a fail and will be reported.

There are 3 ANNOTATE_UNRET_END instances:

 - UNTRAIN_RET itself
 - exception from-kernel; this path doesn't need UNTRAIN_RET
 - all early exceptions; these also don't need UNTRAIN_RET

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
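To make the shape of that pass concrete, below is a minimal C sketch of the
intra-function walk the commit message describes. It is illustrative only,
not the code added to objtool: insn_t, next_insn(), is_unret_end(), is_ret(),
is_indirect() and is_conditional() are hypothetical stand-ins. The real pass
keys off UNWIND_HINT_ENTRY / UNWIND_HINT_IRET_REGS entry points and the
ANNOTATE_UNRET_END annotation as described above.

/*
 * Hypothetical sketch: starting from a marked entry point, every path
 * must reach ANNOTATE_UNRET_END before the first RET or indirect branch.
 * Returns non-zero if an unprotected path is found.
 */
static int validate_entry_path(insn_t *insn)
{
	for (; insn; insn = next_insn(insn)) {
		/* Each instruction only needs to be checked once. */
		if (insn->visited)
			return 0;
		insn->visited = 1;

		/* Mitigation annotation reached: this path is fine. */
		if (is_unret_end(insn))
			return 0;

		/* RET or an indirect call/jump before the mitigation: fail. */
		if (is_ret(insn) || is_indirect(insn))
			return 1;

		/* Follow the branch target; conditionals also fall through. */
		if (insn->jump_dest) {
			if (validate_entry_path(insn->jump_dest))
				return 1;
			if (!is_conditional(insn))
				return 0;
		}
	}
	return 0;
}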
Diffstat (limited to 'arch/x86/entry/entry_64.S')
-rw-r--r--	arch/x86/entry/entry_64.S | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 0c88802e1155..65e3b8b7cbe5 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -85,7 +85,7 @@
  */
 
 SYM_CODE_START(entry_SYSCALL_64)
-	UNWIND_HINT_EMPTY
+	UNWIND_HINT_ENTRY
 	ENDBR
 
 	swapgs
@@ -1095,6 +1095,7 @@ SYM_CODE_START_LOCAL(error_entry)
 .Lerror_entry_done_lfence:
 	FENCE_SWAPGS_KERNEL_ENTRY
 	leaq	8(%rsp), %rax			/* return pt_regs pointer */
+	ANNOTATE_UNRET_END
 	RET
 
 .Lbstep_iret: