path: root/tools/objtool
Age | Commit message | Author | Files | Lines
2021-06-28 | Merge tags 'objtool-urgent-2021-06-28' and 'objtool-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 8 | -51/+136
Pull objtool fix and updates from Ingo Molnar:
 "An ELF format fix for a section flags mismatch bug that breaks kernel
  tooling such as kpatch-build.

  The biggest change in this cycle is the new code to handle and rewrite
  variable sized jump labels - which results in slightly tighter code
  generation in hot paths, through the use of short(er) NOPs.

  Also a number of cleanups and fixes, and a change to the generic
  include/linux/compiler.h to handle a s390 GCC quirk"

* tag 'objtool-urgent-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  objtool: Don't make .altinstructions writable

* tag 'objtool-core-2021-06-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  objtool: Improve reloc hash size guestimate
  instrumentation.h: Avoid using inline asm operand modifiers
  compiler.h: Avoid using inline asm operand modifiers
  kbuild: Fix objtool dependency for 'OBJECT_FILES_NON_STANDARD_<obj> := n'
  objtool: Reflow handle_jump_alt()
  jump_label/x86: Remove unused JUMP_LABEL_NOP_SIZE
  jump_label, x86: Allow short NOPs
  objtool: Provide stats for jump_labels
  objtool: Rewrite jump_label instructions
  objtool: Decode jump_entry::key addend
  jump_label, x86: Emit short JMP
  jump_label: Free jump_entry::key bit1 for build use
  jump_label, x86: Add variable length patching support
  jump_label, x86: Introduce jump_entry_size()
  jump_label, x86: Improve error when we fail expected text
  jump_label, x86: Factor out the __jump_table generation
  jump_label, x86: Strip ASM jump_label support
  x86, objtool: Dont exclude arch/x86/realmode/
  objtool: Rewrite hashtable sizing
2021-06-24 | objtool: Don't make .altinstructions writable | Josh Poimboeuf | 1 | -1/+1
When objtool creates the .altinstructions section, it sets the SHF_WRITE
flag to make the section writable -- unless the section had already been
previously created by the kernel. The mismatch between kernel-created and
objtool-created section flags can cause failures with external tooling
(kpatch-build). And the section doesn't need to be writable anyway.

Make the section flags consistent with the kernel's.

Fixes: 9bc0bb50727c ("objtool/x86: Rewrite retpoline thunk calls")
Reported-by: Joe Lawrence <joe.lawrence@redhat.com>
Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/6c284ae89717889ea136f9f0064d914cd8329d31.1624462939.git.jpoimboe@redhat.com
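For reference, creating such a section with libelf while leaving SHF_WRITE
off looks roughly like the sketch below; this illustrates the flag choice
only, it is not objtool's actual elf_create_section() code and the helper
name is made up.

  #include <gelf.h>

  /* Sketch: emit ".altinstructions" as SHT_PROGBITS with SHF_ALLOC only. */
  static Elf_Scn *create_altinstructions_scn(Elf *elf, size_t name_off)
  {
          Elf_Scn *scn = elf_newscn(elf);
          GElf_Shdr shdr;

          if (!scn || !gelf_getshdr(scn, &shdr))
                  return NULL;

          shdr.sh_name      = name_off;      /* offset of the name in .shstrtab */
          shdr.sh_type      = SHT_PROGBITS;
          shdr.sh_flags     = SHF_ALLOC;     /* no SHF_WRITE: matches the kernel's flags */
          shdr.sh_addralign = 1;

          return gelf_update_shdr(scn, &shdr) ? scn : NULL;
  }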
2021-06-14 | objtool: Improve reloc hash size guestimate | Peter Zijlstra | 2 | -7/+5
Nathan reported that LLVM ThinLTO builds have a performance regression
with commit 25cf0d8aa2a3 ("objtool: Rewrite hashtable sizing"). Sami was
quick to note that this is due to their use of -ffunction-sections.

As a result the .text section is small and basing the number of relocs
off of that no longer works. Instead have read_sections() compute the sum
of all SHF_EXECINSTR sections and use that.

Fixes: 25cf0d8aa2a3 ("objtool: Rewrite hashtable sizing")
Reported-by: Nathan Chancellor <nathan@kernel.org>
Debugged-by: Sami Tolvanen <samitolvanen@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
Link: https://lkml.kernel.org/r/YMJpGLuGNsGtA5JJ@hirez.programming.kicks-ass.net
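The sizing idea reads roughly like the sketch below; the struct and helper
names are hypothetical, and the one-bucket-per-16-bytes ratio is an assumed
illustration rather than objtool's actual heuristic.

  /* Sketch: size the reloc hash from the total of all executable sections. */
  static unsigned int reloc_hash_bits(struct section *secs, int nr_secs)
  {
          unsigned long exec_size = 0;
          unsigned int bits = 10;                 /* assumed floor */
          int i;

          for (i = 0; i < nr_secs; i++)
                  if (secs[i].sh.sh_flags & SHF_EXECINSTR)
                          exec_size += secs[i].sh.sh_size;

          while ((1UL << bits) < exec_size / 16)  /* ~1 bucket per 16 bytes of code */
                  bits++;

          return bits;
  }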
2021-06-11 | objtool: Only rewrite unconditional retpoline thunk calls | Peter Zijlstra | 1 | -0/+4
It turns out that the compilers generate conditional branches to the
retpoline thunks like:

  5d5:   0f 85 00 00 00 00       jne    5db <cpuidle_reflect+0x22>
                 5d7: R_X86_64_PLT32    __x86_indirect_thunk_r11-0x4

while the rewrite can only handle JMP/CALL to the thunks. The result is
the alternative wrecking the code. Make sure to skip writing the
alternatives for conditional branches.

Fixes: 9bc0bb50727c ("objtool/x86: Rewrite retpoline thunk calls")
Reported-by: Lukasz Majczak <lma@semihalf.com>
Reported-by: Nathan Chancellor <nathan@kernel.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Nathan Chancellor <nathan@kernel.org>
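The guard amounts to something like the following; the instruction-type
names are borrowed from objtool's decoder, but the helper itself is only a
sketch, not the actual patch.

  /* Sketch: only JMP/CALL sites may be rewritten; leave Jcc to the thunk alone. */
  static bool can_rewrite_retpoline_site(struct instruction *insn)
  {
          switch (insn->type) {
          case INSN_CALL:
          case INSN_JUMP_UNCONDITIONAL:
                  return true;
          default:
                  /* e.g. INSN_JUMP_CONDITIONAL: "jne __x86_indirect_thunk_r11" */
                  return false;
          }
  }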
2021-06-10 | objtool: Fix .symtab_shndx handling for elf_create_undef_symbol() | Peter Zijlstra | 1 | -1/+24
When an ELF object uses extended symbol section indexes (IOW it has a
.symtab_shndx section), these must be kept in sync with the regular symbol
table (.symtab). So for every new symbol we emit, make sure to also emit a
.symtab_shndx value to keep the arrays of equal size.

Note: since we're writing an UNDEF symbol, most GElf_Sym fields will be 0
and we can repurpose one (st_size) to host the 0 for the xshndx value.

Fixes: 2f2f7e47f052 ("objtool: Add elf_create_undef_symbol()")
Reported-by: Nick Desaulniers <ndesaulniers@google.com>
Suggested-by: Fangrui Song <maskray@google.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lkml.kernel.org/r/YL3q1qFO9QIRL/BA@hirez.programming.kicks-ass.net
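Keeping the two arrays in lockstep with libelf looks roughly like this
sketch (the helper is hypothetical; as the commit notes, objtool itself
reuses st_size as the backing store for the zero value):

  #include <gelf.h>

  /* Sketch: append one Elf32_Word to .symtab_shndx for every symbol added. */
  static int append_xshndx(Elf_Scn *symtab_shndx, Elf32_Word *slot)
  {
          Elf_Data *data = elf_newdata(symtab_shndx);

          if (!data)
                  return -1;

          *slot           = 0;               /* UNDEF symbol: no extended index */
          data->d_buf     = slot;            /* buffer must outlive elf_update() */
          data->d_size    = sizeof(*slot);
          data->d_type    = ELF_T_WORD;
          data->d_align   = 4;
          data->d_version = EV_CURRENT;

          return 0;
  }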
2021-05-14 | objtool: Reflow handle_jump_alt() | Peter Zijlstra | 1 | -11/+11
Miroslav figured the code flow in handle_jump_alt() was sub-optimal with that goto. Reflow the code to make it clearer. Reported-by: Miroslav Benes <mbenes@suse.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/YJ00lgslY+IpA/rL@hirez.programming.kicks-ass.net
2021-05-12 | objtool/x86: Fix elf_add_alternative() endianness | Vasily Gorbik | 1 | -1/+2
Currently an x86 kernel cross-compiled on a big endian system fails at
boot with:

  kernel BUG at arch/x86/kernel/alternative.c:258!

The corresponding bug condition looks like the following:

  BUG_ON(feature >= (NCAPINTS + NBUGINTS) * 32);

Fix that by converting the alternative feature/cpuid to target endianness.

Fixes: 9bc0bb50727c ("objtool/x86: Rewrite retpoline thunk calls")
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: https://lore.kernel.org/r/patch-2.thread-6c9df9.git-6c9df9a8098d.your-ad-here.call-01620841104-ext-2554@work.hours
2021-05-12 | objtool: Fix elf_create_undef_symbol() endianness | Vasily Gorbik | 1 | -0/+1
Currently x86 cross-compilation fails on a big endian system with:

  x86_64-cross-ld: init/main.o: invalid string offset 488112128 >= 6229 for section `.strtab'

Mark new ELF data in elf_create_undef_symbol() as symbol, so that libelf
does endianness handling correctly.

Fixes: 2f2f7e47f052 ("objtool: Add elf_create_undef_symbol()")
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: https://lore.kernel.org/r/patch-1.thread-6c9df9.git-d39264656387.your-ad-here.call-01620841104-ext-2554@work.hours
2021-05-12 | objtool: Provide stats for jump_labels | Peter Zijlstra | 2 | -2/+23
Add objtool --stats to count the jump_label sites it encounters. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210506194158.153101906@infradead.org
2021-05-12 | objtool: Rewrite jump_label instructions | Peter Zijlstra | 1 | -0/+14
When a jump_entry::key has bit1 set, rewrite the instruction to be a NOP. This allows the compiler/assembler to emit JMP (and thus decide on which encoding to use). Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Link: https://lore.kernel.org/r/20210506194158.091028792@infradead.org
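In outline the rewrite is a conditional overwrite of the emitted JMP with a
same-length NOP; the helpers below are hypothetical stand-ins for objtool's
write path, not its real API.

  /* Sketch: if bit1 of jump_entry::key is set, the site starts life as a NOP. */
  static void rewrite_jump_label_site(struct instruction *insn, long key_addend)
  {
          if (!(key_addend & 2))
                  return;                 /* leave the JMP in place */

          /* overwrite the JMP with a NOP of the same length */
          write_insn_bytes(insn->sec, insn->offset, nop_bytes(insn->len), insn->len);
  }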
2021-05-12 | objtool: Decode jump_entry::key addend | Peter Zijlstra | 3 | -0/+16
Teach objtool about the low bits in the struct static_key pointer.

That is, the low two bits of @key in:

  struct jump_entry {
          s32 code;
          s32 target;
          long key;
  }

as found in the __jump_table section. Since @key has a relocation to the
variable (to be resolved by the linker), the low two bits will be
reflected in the relocation's addend. As such, find the reloc and store
the addend, such that we can access these bits.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210506194158.028024143@infradead.org
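Once the __jump_table relocation for @key is found, the decode itself is a
two-bit mask of the addend, along the lines of this sketch (field names are
hypothetical):

  /* Sketch: stash the low two bits of the jump_entry::key relocation addend. */
  static void decode_jump_entry_key(struct instruction *insn, struct reloc *key_reloc)
  {
          /* bit0/bit1 carry static_key flags; the rest is the symbol offset */
          insn->key_addend = key_reloc->addend & 3;
  }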
2021-05-12 | objtool: Rewrite hashtable sizing | Peter Zijlstra | 2 | -47/+83
Currently objtool has 5 hashtables and sizes them 16 or 20 bits depending
on the --vmlinux argument. However, a single size doesn't really work well
for the 5 tables, which among them cover 3 different uses. Also, while
vmlinux is larger, there is still a very wide difference between a
defconfig and allyesconfig build, which again isn't optimally covered by a
single size.

Another aspect is the cost of elf_hash_init(), which for large tables
dominates the runtime for small input files. It turns out that all it does
is assign NULL, something that is required when using malloc(). However,
when we allocate memory using mmap(), we're guaranteed to get zero filled
pages.

Therefore, rewrite the whole thing to:

 1) use more dynamically sized tables, depending on the input file,
 2) avoid the need for elf_hash_init() entirely by using mmap().

This speeds up a regular kernel build (100s to 98s for x86_64-defconfig),
and potentially dramatically speeds up vmlinux processing.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lore.kernel.org/r/20210506194157.452881700@infradead.org
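The mmap() part of the trick relies on anonymous mappings being zero-filled,
so freshly allocated buckets are already NULL; a minimal sketch, with a
made-up helper name:

  #include <sys/mman.h>

  /* Sketch: zero-filled bucket array with no explicit elf_hash_init() pass. */
  static void *alloc_hash_buckets(size_t nr_buckets, size_t bucket_size)
  {
          void *table = mmap(NULL, nr_buckets * bucket_size,
                             PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

          return table == MAP_FAILED ? NULL : table;
  }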
2021-04-28 | Merge tag 'objtool-core-2021-04-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 8 | -166/+299
Pull objtool updates from Ingo Molnar:

 - Standardize the crypto asm code so that it looks like compiler-
   generated code to objtool - so that it can understand it. This
   enables unwinding from crypto asm code - and also fixes the last
   known remaining objtool warnings for LTO and more.

 - x86 decoder fixes: clean up and fix the decoder, and also extend it
   a bit

 - Misc fixes and cleanups

* tag 'objtool-core-2021-04-28' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (24 commits)
  x86/crypto: Enable objtool in crypto code
  x86/crypto/sha512-ssse3: Standardize stack alignment prologue
  x86/crypto/sha512-avx2: Standardize stack alignment prologue
  x86/crypto/sha512-avx: Standardize stack alignment prologue
  x86/crypto/sha256-avx2: Standardize stack alignment prologue
  x86/crypto/sha1_avx2: Standardize stack alignment prologue
  x86/crypto/sha_ni: Standardize stack alignment prologue
  x86/crypto/crc32c-pcl-intel: Standardize jump table
  x86/crypto/camellia-aesni-avx2: Unconditionally allocate stack buffer
  x86/crypto/aesni-intel_avx: Standardize stack alignment prologue
  x86/crypto/aesni-intel_avx: Fix register usage comments
  x86/crypto/aesni-intel_avx: Remove unused macros
  objtool: Support asm jump tables
  objtool: Parse options from OBJTOOL_ARGS
  objtool: Collate parse_options() users
  objtool: Add --backup
  objtool,x86: More ModRM sugar
  objtool,x86: Rewrite ADD/SUB/AND
  objtool,x86: Support %riz encodings
  objtool,x86: Simplify register decode
  ...
2021-04-19 | objtool: Support asm jump tables | Josh Poimboeuf | 1 | -1/+13
Objtool detection of asm jump tables would normally just work, except for
the fact that asm retpolines use alternatives. Objtool thinks the
alternative code path (a jump to the retpoline) is a sibling call.

Don't treat alternative indirect branches as sibling calls when the
original instruction has a jump table.

Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Ard Biesheuvel <ardb@kernel.org>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Tested-by: Sami Tolvanen <samitolvanen@google.com>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Herbert Xu <herbert@gondor.apana.org.au>
Link: https://lore.kernel.org/r/460cf4dc675d64e1124146562cabd2c05aa322e8.1614182415.git.jpoimboe@redhat.com
2021-04-02 | objtool/x86: Rewrite retpoline thunk calls | Peter Zijlstra | 1 | -0/+117
When the compiler emits:

  "CALL __x86_indirect_thunk_\reg"

for an indirect call, have objtool rewrite it to:

  ALTERNATIVE "call __x86_indirect_thunk_\reg",
              "call *%reg", ALT_NOT(X86_FEATURE_RETPOLINE)

Additionally, in order to not emit endless identical
.altinst_replacement chunks, use a global symbol for them, see
__x86_indirect_alt_*.

This also avoids objtool from having to do code generation.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Miroslav Benes <mbenes@suse.cz>
Link: https://lkml.kernel.org/r/20210326151300.320177914@infradead.org
2021-04-02 | objtool: Skip magical retpoline .altinstr_replacement | Peter Zijlstra | 1 | -1/+11
When the .altinstr_replacement is a retpoline, skip the alternative. We already special case retpolines anyway. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151300.259429287@infradead.org
2021-04-02 | objtool: Cache instruction relocs | Peter Zijlstra | 2 | -6/+23
Track the reloc of instructions in the new instruction->reloc field to avoid having to look them up again later. ( Technically x86 instructions can have two relocations, but not jumps and calls, for which we're using this. ) Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151300.195441549@infradead.org
2021-04-02 | objtool: Keep track of retpoline call sites | Peter Zijlstra | 5 | -6/+34
Provide infrastructure for architectures to rewrite/augment compiler generated retpoline calls. Similar to what we do for static_call()s, keep track of the instructions that are retpoline calls. Use the same list_head, since a retpoline call cannot also be a static_call. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151300.130805730@infradead.org
2021-04-02 | objtool: Add elf_create_undef_symbol() | Peter Zijlstra | 2 | -0/+61
Allow objtool to create undefined symbols; this allows creating relocations to symbols not currently in the symbol table. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151300.064743095@infradead.org
2021-04-02 | objtool: Extract elf_symbol_add() | Peter Zijlstra | 1 | -25/+31
Create a common helper to add symbols. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151300.003468981@infradead.org
2021-04-02 | objtool: Extract elf_strtab_concat() | Peter Zijlstra | 1 | -22/+38
Create a common helper to append strings to a strtab. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151259.941474004@infradead.org
2021-04-02 | objtool: Create reloc sections implicitly | Peter Zijlstra | 4 | -10/+8
Have elf_add_reloc() create the relocation section implicitly. Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151259.880174448@infradead.org
2021-04-02 | objtool: Add elf_create_reloc() helper | Peter Zijlstra | 4 | -119/+85
We have 4 instances of adding a relocation. Create a common helper to avoid growing even more. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151259.817438847@infradead.org
2021-04-02 | objtool: Rework the elf_rebuild_reloc_section() logic | Peter Zijlstra | 4 | -16/+14
Instead of manually calling elf_rebuild_reloc_section() on sections we've called elf_add_reloc() on, have elf_write() DTRT. This makes it easier to add random relocations in places without carefully tracking when we're done and need to flush what section. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151259.754213408@infradead.org
2021-04-02 | objtool: Fix static_call list generation | Peter Zijlstra | 1 | -5/+12
Currently, objtool generates tail call entries in add_jump_destination() but waits until validate_branch() to generate the regular call entries. Move these to add_call_destination() for consistency. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151259.691529901@infradead.org
2021-04-02 | objtool: Handle per arch retpoline naming | Peter Zijlstra | 3 | -2/+14
The __x86_indirect_ naming is obviously not generic. Shorten to allow matching some additional magic names later. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151259.630296706@infradead.org
2021-04-02 | objtool: Correctly handle retpoline thunk calls | Peter Zijlstra | 1 | -0/+12
Just like JMP handling, convert a direct CALL to a retpoline thunk into a retpoline safe indirect CALL. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Ingo Molnar <mingo@kernel.org> Reviewed-by: Miroslav Benes <mbenes@suse.cz> Link: https://lkml.kernel.org/r/20210326151259.567568238@infradead.org
2021-04-02 | x86/retpoline: Simplify retpolines | Peter Zijlstra | 1 | -2/+1
Due to:

  c9c324dc22aa ("objtool: Support stack layout changes in alternatives")

it is now possible to simplify the retpolines.

Currently our retpolines consist of 2 symbols:

 - __x86_indirect_thunk_\reg: the compiler target
 - __x86_retpoline_\reg: the actual retpoline.

Both are consecutive in code and aligned such that for any one register
they both live in the same cacheline:

  0000000000000000 <__x86_indirect_thunk_rax>:
     0:   ff e0                   jmpq   *%rax
     2:   90                      nop
     3:   90                      nop
     4:   90                      nop

  0000000000000005 <__x86_retpoline_rax>:
     5:   e8 07 00 00 00          callq  11 <__x86_retpoline_rax+0xc>
     a:   f3 90                   pause
     c:   0f ae e8                lfence
     f:   eb f9                   jmp    a <__x86_retpoline_rax+0x5>
    11:   48 89 04 24             mov    %rax,(%rsp)
    15:   c3                      retq
    16:   66 2e 0f 1f 84 00 00 00 00 00   nopw   %cs:0x0(%rax,%rax,1)

The thunk is an alternative_2, where one option is a JMP to the
retpoline. This was done so that objtool didn't need to deal with
alternatives with stack ops. But that problem has been solved, so now it
is possible to fold the entire retpoline into the alternative to simplify
and consolidate unused bytes:

  0000000000000000 <__x86_indirect_thunk_rax>:
     0:   ff e0                   jmpq   *%rax
     2:   90                      nop
     3:   90                      nop
     4:   90                      nop
     5:   90                      nop
     6:   90                      nop
     7:   90                      nop
     8:   90                      nop
     9:   90                      nop
     a:   90                      nop
     b:   90                      nop
     c:   90                      nop
     d:   90                      nop
     e:   90                      nop
     f:   90                      nop
    10:   90                      nop
    11:   66 66 2e 0f 1f 84 00 00 00 00 00        data16 nopw %cs:0x0(%rax,%rax,1)
    1c:   0f 1f 40 00             nopl   0x0(%rax)

Notice that since the longest alternative sequence is now:

     0:   e8 07 00 00 00          callq  c <.altinstr_replacement+0xc>
     5:   f3 90                   pause
     7:   0f ae e8                lfence
     a:   eb f9                   jmp    5 <.altinstr_replacement+0x5>
     c:   48 89 04 24             mov    %rax,(%rsp)
    10:   c3                      retq

17 bytes, we have 15 bytes NOP at the end of our 32 byte slot. (IOW, if we
can shrink the retpoline by 1 byte we can pack it more densely.)

[ bp: Massage commit message. ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210326151259.506071949@infradead.org
2021-04-02 | x86/alternatives: Optimize optimize_nops() | Peter Zijlstra | 1 | -1/+1
Currently, optimize_nops() scans to see if the alternative starts with
NOPs. However, the emit pattern is:

  141: \oldinstr
  142: .skip (len-(142b-141b)), 0x90

That is, when 'oldinstr' is short, the tail is padded with NOPs. This
case never gets optimized.

Rewrite optimize_nops() to replace any trailing string of NOPs inside the
alternative to larger NOPs. Also run it irrespective of patching,
replacing NOPs in both the original and replaced code.

A direct consequence is that 'padlen' becomes superfluous, so remove it.

[ bp:
   - Adjust commit message
   - remove a stale comment about needing to pad
   - add a comment in optimize_nops()
   - exit early if the NOP verif. loop catches a mismatch - function
     should not add NOPs in that case
   - fix the "optimized NOPs" offsets output ]

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Link: https://lkml.kernel.org/r/20210326151259.442992235@infradead.org
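The new behaviour boils down to finding the trailing run of 0x90 padding
and handing it to the kernel's NOP emitter; a rough sketch under that
assumption, not the actual alternative.c code:

  /* Sketch: rewrite the trailing single-byte NOP padding with optimal NOPs. */
  static void optimize_tail_nops(u8 *instr, int len)
  {
          int nops = 0;

          while (nops < len && instr[len - 1 - nops] == 0x90)
                  nops++;

          if (nops)
                  add_nops(instr + len - nops, nops);  /* emit longest-form NOPs */
  }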
2021-04-02 | Merge branch 'x86/cpu' into WIP.x86/core, to merge the NOP changes & resolve a semantic conflict | Ingo Molnar | 2 | -5/+9
Conflict-merge this main commit in essence:

  a89dfde3dc3c: ("x86: Remove dynamic NOP selection")

With this upstream commit:

  b90829704780: ("bpf: Use NOP_ATOMIC5 instead of emit_nops(&prog, 5) for BPF_TRAMP_F_CALL_ORIG")

Semantic merge conflict:

  arch/x86/net/bpf_jit_comp.c

  - memcpy(prog, ideal_nops[NOP_ATOMIC5], X86_PATCH_SIZE);
  + memcpy(prog, x86_nops[5], X86_PATCH_SIZE);

Signed-off-by: Ingo Molnar <mingo@kernel.org>
2021-03-15 | objtool/x86: Use asm/nops.h | Peter Zijlstra | 2 | -5/+9
Since the kernel will rely on a single canonical set of NOPs, make sure objtool uses the exact same ones. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210312115749.136357911@infradead.org
2021-03-15 | tools/objtool: Convert to insn_decode() | Borislav Petkov | 1 | -5/+4
Simplify code, no functional changes. Signed-off-by: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/20210304174237.31945-18-bp@alien8.de
2021-03-15 | x86/insn: Add a __ignore_sync_check__ marker | Borislav Petkov | 1 | -4/+13
Add an explicit __ignore_sync_check__ marker which will be used to mark lines which are supposed to be ignored by file synchronization check scripts, its advantage being that it explicitly denotes such lines in the code. Signed-off-by: Borislav Petkov <bp@suse.de> Reviewed-by: Masami Hiramatsu <mhiramat@kernel.org> Link: https://lkml.kernel.org/r/20210304174237.31945-4-bp@alien8.de
2021-03-12 | objtool,x86: Fix uaccess PUSHF/POPF validation | Peter Zijlstra | 1 | -0/+3
Commit ab234a260b1f ("x86/pv: Rework arch_local_irq_restore() to not use
popf") replaced "push %reg; popf" with something like:
"test $0x200, %reg; jz 1f; sti; 1:", which breaks the pushf/popf symmetry
that commit ea24213d8088 ("objtool: Add UACCESS validation") relies on.
The result is:

  drivers/gpu/drm/amd/amdgpu/si.o: warning: objtool: si_common_hw_init()+0xf36: PUSHF stack exhausted

Meanwhile, commit c9c324dc22aa ("objtool: Support stack layout changes in
alternatives") makes that we can actually use stack-ops in alternatives,
which means we can revert 1ff865e343c2 ("x86,smap: Fix
smap_{save,restore}() alternatives").

That in turn means we can limit the PUSHF/POPF handling of ea24213d8088
to those instructions that are in alternatives.

Fixes: ab234a260b1f ("x86/pv: Rework arch_local_irq_restore() to not use popf")
Reported-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lkml.kernel.org/r/YEY4rIbQYa5fnnEp@hirez.programming.kicks-ass.net
2021-03-06 | objtool: Parse options from OBJTOOL_ARGS | Peter Zijlstra | 1 | -0/+25
Teach objtool to parse options from the OBJTOOL_ARGS environment
variable. This enables things like:

  $ OBJTOOL_ARGS="--backup" make O=defconfig-build/ kernel/ponies.o

to obtain both defconfig-build/kernel/ponies.o{,.orig} and easily inspect
what objtool actually did.

Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lkml.kernel.org/r/20210226110004.252553847@infradead.org
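The environment hook can be pictured as splicing the whitespace-separated
words of OBJTOOL_ARGS in front of the regular argv before parse_options()
runs; the sketch below is illustrative and the helper name is made up, it
is not objtool's exact code.

  #include <stdlib.h>
  #include <string.h>

  /* Sketch: prepend OBJTOOL_ARGS words to argv[1..] before option parsing. */
  static char **prepend_objtool_args(int *argc, char **argv)
  {
          char *env = getenv("OBJTOOL_ARGS");
          char **nargv;
          int n = 0, i;

          if (!env || !(env = strdup(env)))
                  return argv;

          /* worst case: every other character starts a new word */
          nargv = calloc(*argc + strlen(env) / 2 + 2, sizeof(char *));
          if (!nargv)
                  return argv;

          nargv[n++] = argv[0];
          for (char *tok = strtok(env, " \t"); tok; tok = strtok(NULL, " \t"))
                  nargv[n++] = tok;
          for (i = 1; i < *argc; i++)
                  nargv[n++] = argv[i];

          *argc = n;
          return nargv;
  }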
2021-03-06 | objtool: Collate parse_options() users | Peter Zijlstra | 3 | -9/+12
Ensure there's a single place that parses check_options, in preparation for extending where to get options from. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lkml.kernel.org/r/20210226110004.193108106@infradead.org
2021-03-06 | objtool: Add --backup | Peter Zijlstra | 3 | -2/+69
Teach objtool to write backup files, such that it becomes easier to see
what objtool did to the object file. Backup files will be ${name}.orig.

Suggested-by: Borislav Petkov <bp@alien8.de>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Borislav Petkov <bp@suse.de>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Link: https://lkml.kernel.org/r/YD4obT3aoXPWl7Ax@hirez.programming.kicks-ass.net
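Conceptually the backup is just a byte-for-byte copy of the input object to
"<name>.orig" taken before objtool modifies it; a self-contained sketch of
that idea (not the code the patch adds):

  #include <stdio.h>

  /* Sketch: copy the object file to "<name>.orig" before rewriting it. */
  static int backup_object(const char *name)
  {
          char orig[4096], buf[4096];
          FILE *in, *out;
          size_t n;
          int ret;

          snprintf(orig, sizeof(orig), "%s.orig", name);

          in = fopen(name, "rb");
          if (!in)
                  return -1;
          out = fopen(orig, "wb");
          if (!out) {
                  fclose(in);
                  return -1;
          }

          while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
                  fwrite(buf, 1, n, out);

          ret = ferror(in) ? -1 : 0;
          if (fclose(out))
                  ret = -1;
          fclose(in);
          return ret;
  }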
2021-03-06 | objtool,x86: More ModRM sugar | Peter Zijlstra | 1 | -11/+17
Better helpers to decode ModRM. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Link: https://lkml.kernel.org/r/YCZB/ljatFXqQbm8@hirez.programming.kicks-ass.net
2021-03-06 | objtool,x86: Rewrite ADD/SUB/AND | Peter Zijlstra | 1 | -19/+51
Support sign extending and imm8 forms. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lkml.kernel.org/r/20210211173627.588366777@infradead.org
2021-03-06 | objtool,x86: Support %riz encodings | Peter Zijlstra | 1 | -19/+48
When there's a SIB byte, the register otherwise denoted by r/m will then
be denoted by SIB.base, and REX.B will now extend this. SIB.index == SP
is magic and denotes an index value of zero.

This means that there's a bunch of alternative (longer) encodings for the
same thing. E.g. 'ModRM.mod != 3, ModRM.r/m = AX' can be encoded as
'ModRM.mod != 3, ModRM.r/m = SP, SIB.base = AX, SIB.index = SP', which is
actually 4 different encodings because the value of SIB.scale is
irrelevant, giving rise to 5 different but equal encodings.

Support these encodings and clean up the SIB handling in general.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lkml.kernel.org/r/20210211173627.472967498@infradead.org
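The decode rule above can be summarized in a few lines; the helper below is
a sketch with made-up names, showing how SIB.index == 4 with REX.X clear
collapses to "no index" (%riz).

  /* Sketch: SIB decode -- scale[7:6], index[5:3], base[2:0]; REX.X/B extend. */
  static void decode_sib(unsigned char sib, int rex_x, int rex_b,
                         int *base, int *index, int *scale)
  {
          *base  = (sib & 0x7) | (rex_b << 3);
          *index = ((sib >> 3) & 0x7) | (rex_x << 3);
          *scale = 1 << (sib >> 6);

          if (*index == 4)        /* SIB.index == SP and REX.X == 0: %riz */
                  *index = -1;    /* no index register */
  }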
2021-03-06 | objtool,x86: Simplify register decode | Peter Zijlstra | 1 | -40/+39
Since the CFI_reg number now matches the instruction encoding order do away with the op_to_cfi_reg[] and use direct assignment. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lkml.kernel.org/r/20210211173627.362004522@infradead.org
2021-03-06 | objtool,x86: Rewrite LEAVE | Peter Zijlstra | 3 | -26/+13
Since we can now have multiple stack-ops per instruction, we don't need to special case LEAVE and can simply emit the composite operations. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lkml.kernel.org/r/20210211173627.253273977@infradead.org
2021-03-06 | objtool,x86: Rewrite LEA decode | Peter Zijlstra | 1 | -58/+28
Current LEA decoding is a bunch of special cases, properly decode the instruction, with exception of full SIB and RIP-relative modes. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lkml.kernel.org/r/20210211173627.143250641@infradead.org
2021-03-06 | objtool,x86: Renumber CFI_reg | Peter Zijlstra | 1 | -6/+6
Make them match the instruction encoding numbering. Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Acked-by: Josh Poimboeuf <jpoimboe@redhat.com> Tested-by: Nick Desaulniers <ndesaulniers@google.com> Link: https://lkml.kernel.org/r/20210211173627.033720313@infradead.org
2021-03-06 | objtool: Allow UNWIND_HINT to suppress dodgy stack modifications | Peter Zijlstra | 1 | -6/+9
rewind_stack_do_exit()

  UNWIND_HINT_FUNC
  /* Prevent any naive code from trying to unwind to our caller. */
  xorl    %ebp, %ebp
  movq    PER_CPU_VAR(cpu_current_top_of_stack), %rax
  leaq    -PTREGS_SIZE(%rax), %rsp
  UNWIND_HINT_REGS
  call    do_exit

Does unspeakable things to the stack, which objtool currently fails to
detect due to a limitation in instruction decoding. This will be rectified
after which the above will result in:

  arch/x86/entry/entry_64.o: warning: objtool: .text+0xab: unsupported stack register modification

Allow the UNWIND_HINT on the next instruction to suppress this, it will
overwrite the state anyway.

Suggested-by: Josh Poimboeuf <jpoimboe@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Acked-by: Josh Poimboeuf <jpoimboe@redhat.com>
Tested-by: Nick Desaulniers <ndesaulniers@google.com>
Link: https://lkml.kernel.org/r/20210211173626.918498579@infradead.org
2021-02-24 | Merge tag 'x86-entry-2021-02-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip | Linus Torvalds | 1 | -0/+14
Pull x86 irq entry updates from Thomas Gleixner:
 "The irq stack switching was moved out of the ASM entry code in the
  course of the entry code consolidation. It ended up being suboptimal
  in various ways.

  This reworks the X86 irq stack handling:

   - Make the stack switching inline so the stackpointer manipulation
     is no longer at an easy to find place.

   - Get rid of the unnecessary indirect call.

   - Avoid the double stack switching in interrupt return and reuse the
     interrupt stack for softirq handling.

   - An objtool fix for CONFIG_FRAME_POINTER=y builds where it got
     confused about the stack pointer manipulation"

* tag 'x86-entry-2021-02-24' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  objtool: Fix stack-swizzle for FRAME_POINTER=y
  um: Enforce the usage of asm-generic/softirq_stack.h
  x86/softirq/64: Inline do_softirq_own_stack()
  softirq: Move do_softirq_own_stack() to generic asm header
  softirq: Move __ARCH_HAS_DO_SOFTIRQ to Kconfig
  x86: Select CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK
  x86/softirq: Remove indirection in do_softirq_own_stack()
  x86/entry: Use run_sysvec_on_irqstack_cond() for XEN upcall
  x86/entry: Convert device interrupts to inline stack switching
  x86/entry: Convert system vectors to irq stack macro
  x86/irq: Provide macro for inlining irq stack switching
  x86/apic: Split out spurious handling code
  x86/irq/64: Adjust the per CPU irq stack pointer by 8
  x86/irq: Sanitize irq stack tracking
  x86/entry: Fix instrumentation annotation
2021-02-24 | kasan: prefix global functions with kasan_ | Andrey Konovalov | 1 | -1/+1
Patch series "kasan: HW_TAGS tests support and fixes", v4.

This patchset adds support for running KASAN-KUnit tests with the
hardware tag-based mode and also contains a few fixes.

This patch (of 15):

There's a number of internal KASAN functions that are used across
multiple source code files and therefore aren't marked as static inline.
To avoid littering the kernel function names list with generic function
names, prefix all such KASAN functions with kasan_.

As a part of this change:

 - Rename internal (un)poison_range() to kasan_(un)poison() (no _range)
   to avoid name collision with a public kasan_unpoison_range().

 - Rename check_memory_region() to kasan_check_range(), as it's a more
   fitting name.

Link: https://lkml.kernel.org/r/cover.1610733117.git.andreyknvl@google.com
Link: https://linux-review.googlesource.com/id/I719cc93483d4ba288a634dba80ee6b7f2809cd26
Link: https://lkml.kernel.org/r/13777aedf8d3ebbf35891136e1f2287e2f34aaba.1610733117.git.andreyknvl@google.com
Signed-off-by: Andrey Konovalov <andreyknvl@google.com>
Suggested-by: Marco Elver <elver@google.com>
Reviewed-by: Marco Elver <elver@google.com>
Reviewed-by: Alexander Potapenko <glider@google.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>
Cc: Peter Collingbourne <pcc@google.com>
Cc: Evgenii Stepanov <eugenis@google.com>
Cc: Branislav Rankov <Branislav.Rankov@arm.com>
Cc: Kevin Brodsky <kevin.brodsky@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
2021-02-23 | Merge tag 'clang-lto-v5.12-rc1-part2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux | Linus Torvalds | 6 | -8/+104
Pull more clang LTO updates from Kees Cook:
 "Clang LTO x86 enablement.

  Full disclosure: while this has _not_ been in linux-next (since it
  initially looked like the objtool dependencies weren't going to make
  v5.12), it has been under daily build and runtime testing by Sami for
  quite some time. These x86 portions have been discussed on lkml, with
  Peter, Josh, and others helping nail things down. The bulk of the
  changes are to get objtool working happily. The rest of the x86
  enablement is very small.

  Summary:

   - Generate __mcount_loc in objtool (Peter Zijlstra)

   - Support running objtool against vmlinux.o (Sami Tolvanen)

   - Clang LTO enablement for x86 (Sami Tolvanen)"

Link: https://lore.kernel.org/lkml/20201013003203.4168817-26-samitolvanen@google.com/
Link: https://lore.kernel.org/lkml/cover.1611263461.git.jpoimboe@redhat.com/

* tag 'clang-lto-v5.12-rc1-part2' of git://git.kernel.org/pub/scm/linux/kernel/git/kees/linux:
  kbuild: lto: force rebuilds when switching CONFIG_LTO
  x86, build: allow LTO to be selected
  x86, cpu: disable LTO for cpu.c
  x86, vdso: disable LTO only for vDSO
  kbuild: lto: postpone objtool
  objtool: Split noinstr validation from --vmlinux
  x86, build: use objtool mcount
  tracing: add support for objtool mcount
  objtool: Don't autodetect vmlinux.o
  objtool: Fix __mcount_loc generation with Clang's assembler
  objtool: Add a pass for generating __mcount_loc
2021-02-23 | objtool: Split noinstr validation from --vmlinux | Sami Tolvanen | 3 | -3/+4
This change adds a --noinstr flag to objtool to allow us to specify that we're processing vmlinux.o without also enabling noinstr validation. This is needed to avoid false positives with LTO when we run objtool on vmlinux.o without CONFIG_DEBUG_ENTRY. Signed-off-by: Sami Tolvanen <samitolvanen@google.com>
2021-02-23 | objtool: Don't autodetect vmlinux.o | Sami Tolvanen | 1 | -5/+1
With LTO, we run objtool on vmlinux.o, but don't want noinstr validation. This change requires --vmlinux to be passed to objtool explicitly. Suggested-by: Peter Zijlstra <peterz@infradead.org> Signed-off-by: Sami Tolvanen <samitolvanen@google.com> Reviewed-by: Kees Cook <keescook@chromium.org>